Your website recommends using `nix-env` to install tools. This hasn’t been the recommended way for a while. It should either be installed declaratively, used in a shell without installing, or be installed using `nix profile`.
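For reference, the modern alternatives look roughly like this (a sketch assuming a recent Nix with flakes enabled; `fd` here just stands in for any package):

```shell
# Ephemeral: drop into a shell with the tool available, nothing installed
nix shell nixpkgs#fd

# Imperative, but via the newer profile mechanism instead of nix-env
nix profile install nixpkgs#fd

# Declarative (NixOS): add the package to your system config instead, e.g.
#   environment.systemPackages = [ pkgs.fd ];
```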
Ah, we didn't know that. Most of the install guides for the tools we listed, like fd, lychee, and gtrash, use nix-env; that's simply what we use for installation.
Eternal Terminal (`et`) was a lifesaver when we worked from an office where our connection would drop regularly. It's like Mosh, but less opinionated, and it doesn't interfere with scrollback.
Probably goes without saying, but for anyone who doesn't know about it, `jq` is life changing, was kind of surprised not to see it. It's a sort of query language for querying JSON blobs. I use it almost every single day. It's indispensable.
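For anyone curious, a tiny taste of what the filters look like (the JSON here is made up):

```shell
# Pull the names of all admin users out of a JSON blob.
echo '{"users":[{"name":"ada","admin":true},{"name":"bob","admin":false}]}' \
  | jq -r '.users[] | select(.admin) | .name'
# prints: ada
```

`.users[]` iterates the array, `select(.admin)` keeps only matching objects, and `-r` emits raw strings instead of quoted JSON.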
I have never heard of “jq”. Oh my goodness. Your comment may have just changed my life. I cannot emphasize enough how many times I have needed a tool like this (and, yes, shame on me for not making a better effort to find one). Thank you!
Yep, the syntax and semantics are quite different from other languages, and it took me a long time and a deep understanding to really appreciate how expressive and well designed it is.
Coincidentally, yesterday I decided I needed a JSON TUI and landed on fx (https://github.com/antonmedv/fx), which seems to have come out of the Wave terminal project and looks quite similar to jless. Also uses vim keybindings. I like it so far.
It has powerful rules functionality to recursively search directories for sensitive information in files.
At its core, Pillager is designed to assist you in determining whether a system is affected by common sources of credential leakage as documented by the MITRE ATT&CK framework.
Good for catching those "oops, I deployed the company password list again" SNAFUs.
You're likely running an old version of macOS that isn't able to use the precompiled binaries, so brew is installing all the dependencies necessary to build eza from scratch.
Your Homebrew may not be configured to pull only the runtime dependencies; as others in this thread have mentioned, it's pulling in all those dependencies because it's building eza (or perhaps one of eza's few transitive dependencies) from source, which brings in quite a list, including openjdk, as you saw.
Homebrew can accidentally end up configured to do this in a number of ways. Some of these may no longer be issues; this list is from memory and should be taken with a grain of salt:
- You might be running an outdated homebrew.
- You might have Homebrew installed as a plain git checkout, which breaks "brew update". "brew doctor" will report on this.
- You might have "inherited" your Homebrew install from a prior Mac (e.g. via disk clone or Time Machine), or from the brief transitional period where Homebrew was x86-via-Rosetta on ARM macs, thus leaving your brew in a situation where it can't find prebuilt packages ("bottles") for what it observes as a hybrid/unique platform. Tools, including your shell, which install Homebrew for you might install it as the wrong (rosetta-emulated) architecture if any process-spawning part of the tool is an x86-only binary. More details on a similar situation I found myself in are here: https://blog.zacbentley.com/post/dtrace-macos/
- (I'm pretty sure most issues in this area have been fixed, but) you might have an old or "inherited" XCode or XCode CLT installation. These, too, can propagate from backups. Removing Homebrew, uninstalling/reinstalling XCode/CLT, and reinstalling Homebrew can help with this.
- The HOMEBREW_ARCH, HOMEBREW_ARTIFACT_DOMAIN, HOMEBREW_BOTTLE_DOMAIN, or other environment variables may be set in your shell such that Homebrew either thinks the platform doesn't have bottles available or it shouldn't download them: https://docs.brew.sh/Manpage#environment
- Perhaps obvious, but your "brew" command might be invoked such that it always builds from source, e.g. via a shell alias.
- Homebrew may be unable to access the bottle repository (https://ghcr.io/v2/homebrew/core/), either due to a network/firewall issue or a temporary outage.
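If you want to check several of these at once, the following are all standard Homebrew commands (output is machine-specific, so treat this as a starting point rather than a diagnosis):

```shell
# Overall sanity check; flags git-checkout installs, stale CLT, etc.
brew doctor

# Shows what Homebrew thinks your platform is (macOS version, CPU,
# Rosetta status) -- if this doesn't match reality, bottles won't resolve.
brew config

# Quick check for a Rosetta-emulated brew on Apple Silicon:
# an ARM-native install lives under /opt/homebrew, an x86 one under /usr/local.
which brew
```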
A noob-friendly Homebrew seems like such a great idea. I especially want a single strategy that spans both utils and apps (casks), versus cobbling together the Apple App Store, SetApp, and Homebrew.
Those GUIs would be even more useful if they spotted and explained the config issues you listed. (I have no idea if "brew doctor" suffices.)
You’re welcome! One more issue that I missed calling out: a bottle may not yet be available for your platform (Sequoia) as it is very new. In that case, patience.
It's my understanding that these good folks have moved away entirely from their hosted stuff. In the context of glos, this was the "stash" feature, removed with the v2 release.
Do the creators/maintainers of these tools ever try to get their improvements merged into the tools they aim to replace? And does it ever happen? For a while I've heard about things like ripgrep and such that seem to be so much better and faster than grep, so why wouldn't those kinds of improvements get brought into grep?
(Note: I'm not asking this from a "down with the old ways!" perspective, but just out of curiosity. I assume there's a reason people are making separate tools instead of improving the existing ones, I just don't know what that reason is.)
One of the biggest improvements reported by my users is the smart filtering enabled by default in ripgrep. That can't be contributed back to GNU grep.
Also, people have tried to make GNU grep use multiple threads. As far as I know, none of those attempts has led to a merged patch.
There are a boatload of other reasons to be honest as well.
And there's no reason why I specifically need to do it. I've written extensively on how and why ripgrep is fast, all the way down to details in the regex engine. There is no mystery here, and anyone can take these techniques and try to port them to another grep.
I've written some tools like this; some released, some not (mostly because unfinished). The problem with "merging it upstream" is that it's just a lot of friction. There are also some ideas I have and having my own tool gives me the freedom to explore and experiment without worrying too much what other people think or compatibility.
And then there's the language choice, as well as code quality. I don't really want to start a huge discussion about this, but it should be pretty obvious that many people are more comfortable and productive using languages that are not C, and that some of these tools don't have the best C code you can find.
A lot of "Rewrite in Rust" (or Go, Python, what-have-you) isn't really about the "Rewrite in Rust" as such, but rather about "Rewrite so I can play around with ideas I have".
Some algorithm improvements ripgrep uses could be brought into grep, but ripgrep at its core just operates differently. It uses threads by default, assumes unicode, has a completely different regex engine, amongst other things. It could also probably be argued that some things from ripgrep would be pretty difficult to port from Rust to C or C++ safely.
All the behind-the-scenes improvements aside, ripgrep just has significantly better defaults: recursive search, skipping binaries, respecting .gitignore, etc. I do not think it would be possible to get grep to change these defaults. It would break decades of ossified scripts.
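To make that concrete, here is roughly what it takes to approximate ripgrep's defaults with GNU grep (the flag mapping is approximate, and the sample tree is invented):

```shell
# Build a tiny tree: a text file, a binary file, and a .git directory.
dir=$(mktemp -d)
mkdir -p "$dir/.git"
echo 'TODO: fix me' > "$dir/code.txt"
printf 'TODO\0junk' > "$dir/blob.bin"
echo 'TODO: internal' > "$dir/.git/notes"

# ripgrep's out-of-the-box behavior is simply: rg TODO "$dir"
# The approximate GNU grep spelling of those defaults:
grep -rIn --exclude-dir=.git TODO "$dir"
# -r recurse, -I skip binary files, -n show line numbers,
# --exclude-dir=.git handles only the most obvious ignore rule;
# real .gitignore parsing has no grep equivalent.
# Only code.txt should match here.
```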
Ripgrep is written in Rust, while the original grep is in C, so it doesn't make sense to try to combine them. Ripgrep is the modern rewrite, and we keep the old grep around for backwards compatibility.
I used to use ranger, but have since switched to yazi[1] for speed and out of the box image support. (Ranger can do the same, but I think you have to set the preview_images_method[2]).
It's a rewrite of restic in Rust, but with a few more quality-of-life features and a config file for setting parameters (instead of restic's CLI-flags-only approach).
It’s what changed my backups from something I’d poke at every few days to completely on autopilot.
The problem (for me) with non-standard CLI tools is that whenever I'm using some other computer (e.g. a VM I spun up, or a server I'm SSH'd into), those custom tools aren't available unless I go out of my way to install them, and I have to fall back on standard coreutils. So for me, custom CLI tooling only seems worth it if your work environment is relatively stable.
A vast and interesting collection of CLI tools that can then bootstrap lots of other programs and functions in a consistent and structured way (X bootstraps 1000+ tools, plus your own scripts).
It's a benchmarking CLI tool that can be used as an alternative to `time`. I often use it to detect flaky tests: I run something like `hyperfine --show-output --runs 100 'go test ./... -count=1'`, and it helps me catch tests that fail unreliably.
I didn't like frecency in this and similar tools. I would often get put in directories that I didn't want. I wrote my own simple script that uses recency only, and if there are multiple possible matches you get to choose which one you want (though this is configurable).
> I would often get put in directories that I didn't want.
I solved this by combining it with fzf. Get all the directories you've ever visited and pass them to fzf (sorted by frequency). Then do your matching. You can trivially see whether the match is taking you where you want; if not, it's likely the second or third match. You're no longer constrained to navigating only to the top-matched directory.
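Concretely, with zoxide that combination can be a one-line function (a sketch; `zoxide query --list` prints every known directory ranked by score, and zoxide also ships a similar built-in interactive jump, `zi`):

```shell
# Pick from every directory zoxide knows about, ranked by its score,
# and cd into whichever one you select in fzf.
zf() {
  local dest
  dest=$(zoxide query --list | fzf) && cd "$dest"
}
```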
I adore it. `z <project I'm working on>` is my brain's hardwired shortcut to get back to what I was doing.
Pair it with dotenv to automatically set my shell environment for that project, whatever it is I'm doing at the moment, and it's sooo ergonomic to bounce around between tasks.
I used and loved z for years, but migrated to zsh-z (https://github.com/agkozak/zsh-z) when macOS switched the default shell and it became apparent that z wouldn't be compatible with it.
Anyone have a view on whether I should switch from zsh-z (~2k gh stars) to Zoxide (~22k stars)?
It's a slowly developing trend, but I also wish a `--json` output flag were part of every CLI utility.
Tldr sounds interesting. Man pages are awful for quick reference. At this point it should be possible to collect the statistically top-ranked example usages of commands and provide them, especially when there are very common associated commands they get piped into.
I tried out tldr a few years back, and in practice it never seemed to have what I wanted.
Now, for the same use case, I search for the man page on Kagi, use the LLM "ask this page questions" feature to ask the man page how to do what I want, and then Ctrl-F for the flags it outputs and read the man page entries for those to make sure there are no hallucinations.
https://terminaltrove.com/new/
https://terminaltrove.com/tool-of-the-week/
Every tool added has images/gifs and a quick way to install it.
We love this list and sponsored the development of fd which we heavily use ourselves!