Command line tools for productive programmers (earthly.dev)
427 points by mooreds on July 29, 2021 | 140 comments



I'm a VIM-only dev, I spend all day in a Tmux session, and I use the command line probably a lot more than average.

The single best improvement to my command-line workflow I've done in the last few years has been switching shells. First I moved from Bash to ZSH, but about 2 years ago I switched again to Fish.

Why I love it:

- history completion searches by default, so I can just type part of a command, hit arrow-up, and look through only the relevant results (ZSH also did this, but Fish is better)

- tab completions are shown to you in advance (in light gray in front of your cursor)

- Basic configuration (which is good enough, honestly) can be done via a web browser. Just run "fish_config", and it starts a web server, pops open your browser and any changes you make there are saved to your actual config files. No more looking up syntax.

- aliases (which are called abbreviations) are actually expanded after typing, so the full command appears in your history. You can also edit the full command before running it, if you need.


On a similar note, zsh-history-substring-search has become something I look for everywhere.

https://github.com/zsh-users/zsh-history-substring-search


Can't agree more with this. I too am a (neo)vim + tmux dev, and I made the same progression in shells. Fish just works out of the box. I actually stopped using autojump (see Better CD in the article) because Fish's context-aware auto-completion is that good.


Do you have any trouble running shell scripts? I'd suppose the #! line at the top of the script would save you from problems in most cases.

What about bash one liners?

I had a bad experience with Ubuntu's (Debian's) dash shell many years ago and I've (lazily) just stuck with bash since then.


Not OP, but happy fish user too.

For me, yes: I still write my scripts as hashbanged (ba)sh scripts; the only fish I write is my config.fish. Fish might be an objectively better language, but I don't want to learn it and prefer to keep writing (ba)sh, as:

1. (ba)sh is an evil I know...

2. ... that I must know anyway to work on servers (while fish's "utility" is low as used nowhere but around the fish shell)...

3. ... and shellcheck makes it (a bit) less of a footgun.

When I do need to write a bash one-liner, I just run it in bash :) . But it's a once-in-a-blue-moon occurrence, since pipes are the same, fish added syntax for && and || a few years ago (before that, they didn't exist and you had to use `; and` / `; or`), and it added support a few months ago for `FOO=bar ...` direct env var setting (before that you had to use `env FOO=bar ...`). So your typical `thing --blah | grep stuff | awk stuff | whatever && something` generally works in fish.
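
For illustration, a quick sketch of both spellings (the script name here is made up):

    # fish 3.0+ accepts bash-style combiners; 3.1+ accepts inline env vars
    FOO=bar ./deploy.fish && echo ok || echo failed
    # the older fish spelling of the same pipeline
    env FOO=bar ./deploy.fish; and echo ok; or echo failed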


I use fish. My biggest pro tip: don't chsh to set your login shell to fish. Instead, configure your terminal program to launch fish.

> do you have any trouble running shell scripts?

Nope. The shebang means the script runs under bash (or whatever interpreter it names) regardless of your interactive shell.

> What about bash one liners?

You will have to translate these.


Easy: instead of "./the-script.sh", I run "bash the-script.sh".

You can still type "bash" to get bash.


+1 for fish shell, it really is great. Powerline prompt with it is also super useful.


Quick note: aliases and abbreviations are separate things in Fish; using the 'alias' function will create a wrapper function that does not expand after typing.
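
A minimal sketch of the difference (the 'gs' shortcut is just an example):

    # 'alias' creates a wrapper function; history only records "gs"
    alias gs='git status'
    # 'abbr' expands in place as you type; history records "git status"
    abbr -a gs 'git status'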


Author here. Thanks for sharing!

Have you seen gron[1]? It can't do everything that jq can, but it is much more in the Unix philosophy than jq. It simply flattens and unflattens JSON into lines so you can use standard Unix tools on it.

    ▶ gron "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | fgrep "commit.author"
    json[0].commit.author = {};
    json[0].commit.author.date = "2016-07-02T10:51:21Z";
    json[0].commit.author.email = "mail@tomnomnom.com";
    json[0].commit.author.name = "Tom Hudson";

    ▶ gron "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | fgrep "commit.author" | gron --ungron
    [
      {
        "commit": {
          "author": {
            "date": "2016-07-02T10:51:21Z",
            "email": "mail@tomnomnom.com",
            "name": "Tom Hudson"
          }
        }
      }
    ]
[1] https://github.com/tomnomnom/gron


I personally have settled on fx[1]; there's no need to learn yet another tool's syntax, specificities, and quirks. JS anonymous functions are valid inputs to the tool.

When the transformation turns out to be more complex than expected, I can just copy and paste what I've made so far into a nodejs script.

You can also configure a .fxrc file to automatically import npm packages that you might find useful, shortcuts, or your personal functions.
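
For instance, a typical invocation looks like this (the JSON is made up):

    echo '{"commits": [{"author": "Tom"}]}' | fx 'x => x.commits.map(c => c.author)'
    # ["Tom"]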

[1] https://github.com/antonmedv/fx


That looks really useful, thank you. Plus when you use it, you can chant "GRON. GRON" like you're an Orc about to attack Minas Tirith!


Also, have you seen this fzf "trick"?

I am sorry for spamming this comment, but I wanted to share it with all the fzf users because I found it so game-changing:

I use "ESC-c" a lot. It calls fzf and fd, you type a part of your directory name, hit enter, and it cd's to that directory.

    export FZF_ALT_C_COMMAND='fd --type directory'
That's in my .zshenv.

It's incredibly useful if you have multi-layered project directories. Like ~/Projects/Terraform/Cool_Project - you'd hit esc-c, type cool, hit enter and you're in the place you wanted to be.


I use a similar trick but with aliases.

If I want to cd to, let's say, "~/work/john/some-project", I just type 'fcd' on the console and then "w j s" and ENTER. Most of the time it works as expected and is really fast.

This is a mode provided by a helper that I wrote for fzf to make it easier to use and more useful. In essence it's just a shell script with a bunch of fzf tricks pre-configured.

https://github.com/danisztls/fzfx/


Very cool! I use zoxide as an alias for cd to do this. But it only really works if you have cd'd using the full path in the past - which I usually have.
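
For anyone curious, a minimal zoxide setup looks roughly like this (assuming zsh; it also supports bash and fish):

    eval "$(zoxide init zsh)"   # defines z (and zi for an interactive fzf pick)
    z cool                      # jump to the highest-ranked dir matching "cool"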


In the same vein I use autojump, and combined with ranger (via the ranger-autojump plugin) I can cd into any folder I have previously visited.


Oh, I've not seen ranger before. Looks pretty useful.


I definitely know about gron. I maintain the Chocolatey package[1] for it for all us Windows slobs.

[1] https://community.chocolatey.org/packages/gron


It seems so obvious now that someone's done it. Doing things in a Unix-y way often requires imagination; this is a great example of that.


`gron` is super useful for sed/awk manipulation of json data rather than writing a complicated `jq` script. Both are great, though.


I can't sing the praises enough of entr. Just check out its man page: http://eradman.com/entrproject/

entr lets you watch files and re-run a command any time they change. Whenever I'm working on a script, or go tests, or whatever test-like thing I'm doing that's not in its own bloated test harness, I reach for entr. Great software, does what it's supposed to every time.
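
A typical invocation looks like this (the go test target is just an example):

    # re-run the tests whenever a source file changes; -c clears the screen first
    ls *.go | entr -c go test ./...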


I used to use entr extensively for Sketch plugin development (the vector editing tool by Bohemian Coding). However the app used to crash randomly and I felt my plugin was causing it. I spent an inordinate amount of time trying to debug it, replacing libraries, changing code and so on. Thankfully I noticed at some point that the app was behaving well for long durations of time when I started it without entr. I got rid of it from my toolchain and things have been well ever since.

I still don't know the root cause, but now I know one more place to look for heisenbugs.


It looks like entr does a bunch of weird, potentially incompatible things, like redirecting stdin from /dev/tty and leaving FDs open in the child. However, my guess is that your build process is non-atomic, and your program was crashing due to necessary files being partially updated. This could probably be hacked around by doing something like `entr /bin/sh -c "sleep 1; exec realcommand"`.


I didn't know of entr or inotify at the time. Years ago, I wrote a script that did mostly that, but I've found that I more often use a different script to rerun things based on manually triggered global hotkeys. It scratches a different itch, but in case you want to check it out: https://github.com/swarminglogic/shell-scripts/blob/master/r...

In short, you set up a global hotkey to trigger the rerun of a "key"-ed command. Then you can quickly run a command, which can be rerun with that hotkey.

So, a global hotkey set up for `runrerun -r foo`

Then in a terminal you can use `runrerun -b foo -- <COMMAND>`.

It is a bit over-engineered with multiple sessions, different kill signals for restarts, etc. But that's the gist of it.


There is also `watchexec` [1]. However I don't know how it compares to `entr` or other inotify clients.

[1] : https://github.com/watchexec/watchexec


You can do it with inotify too, though, no?

Great project, though!


'entr' will use inotify where available - the utility is that it's 1) cross-platform and 2) easy to integrate with shell scripting (rather than having to write against inotify or other platform-specific APIs yourself).

As an example, I use 'guard' [0] in a docker container, and on test failure it writes out to a file which is shared outside the container. It's easy for me to fire up a shell script that uses 'entr' to watch those files and pop up notifications when tests fail.
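
A rough sketch of that pattern (the shared path and notify-send are my assumptions, not the actual setup):

    # watch the shared failure file; -p postpones the first run until a change
    echo ~/shared/guard-failures.txt | entr -p notify-send "Tests failed"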

[0] https://github.com/guard/guard


Ah niceeeeee



I use a similar tool called reflex: https://github.com/cespare/reflex


> if the directory has many files or sub-directories, tree becomes much less helpful: you only see the last screen full of information as files scroll past you.

> broot solves this problem by being aware of the size of your terminal window and adapting its output to fit it.

What? You can just pipe `tree` into `less`. The solution isn't to download Yet Another Binary that does this specific thing and doesn't come installed on most platforms.


Using broot as tree isn't necessary, of course; `tree | less` works for that. But as hinted in the article, broot contains a lot more features: fuzzy finding, preview, multi-window copy/paste, renaming, directory navigation, etc.


Does it contain emacs? That would be nice.


What's wrong with dired mode? OK, you'll get each directory in its own buffer. But, thinking of that, it shouldn't be too hard to make dired mode recurse into subdirectories inside the same buffer. (And fold/unfold like org-mode does. ;-)


It's a single variable setting now:

https://www.manueluberti.eu/emacs/2021/07/14/dired/


dired-subtree does this, albeit on demand, not eagerly descending into subdirectories. dired-hacks more generally has lots of good stuff like filtering etc.


There's a broot alternative called lf that opens files in $EDITOR

https://github.com/gokcehan/lf


What would be nice about that?


You could edit files that you just found and renamed. And when you edit files, then there is RCS integration and… linting, spellcheck, printing, you name it. It’s just a unix way. Is broot ISO/IEC 9945:2009 compliant?

/s


That it would contain emacs


Can broot render on a light background? I haven't found an option for that. If it could, I'd use it.



I generally just use zsh globbing for this by the way:

  ls **/*(.)
To list all files, recursively, or some variation thereof depending on what I want: *(/) for directories, *(*) for executable files, etc. With a global alias it can be piped to less with LL: "ls **/*(.) LL". You can actually do a lot of stuff with it, and it's basically a good replacement for find, except with less typing. The syntax looks a bit daunting at first, but especially for basic stuff it's not that hard (e.g. / and * are the same markers that ls -F uses).

I wish there was a better pager than less though, but haven't really found it (and no, Rust project X with 115 different colours that don't even work on a light background and/or "fuzzy" matching that never gives me what I want is not it).


Well, if we wanna be grumpy this morning... I don't understand why people don't learn how to use `find` properly, and just do simple things correctly, like the `-print0` integration with `xargs -0` to safely send a list of files into rm even if they've got spaces or whatever. I never have much use for `tree` unless I'm trying to take a screenshot and make it look kind of nice.
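
For example:

    # delete matches safely even when names contain spaces or newlines
    find . -name '*.tmp' -print0 | xargs -0 rm --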


find and grep are mandatory tools of the Linux toolbox but I use fd and ripgrep most of the time.


[broot author here]

Have you ever tried to pipe `tree` into `less` in a real code directory, with directories containing thousands of files? That solves no problem.

The idea of broot isn't just to "be aware of the size of your terminal": its view is BFS based to show you a useful overview.

See explanation here: https://dystroy.org/broot/#get-an-overview-of-a-directory-ev...


What's wrong with solving problems the unix guys already solved decades ago, and doing so less elegantly?


Exactly, this is the opposite of the Unix way of doing things. Do only one thing, and do it well; be nimble.


> this is the opposite of the unix way of doing things

Which makes sense, this is an article for people on Apple hardware (judging by the installation instructions), so no interest in keeping things "the unix way of doing things"


I have no dog in this fight (I use both Linux and macOS as daily drivers) but it could be argued that macOS has more of a claim to be Unix than Linux since the foundation of macOS comes from NeXTSTEP and FreeBSD whose codebases both ultimately descend from the original Bell Labs Unix, whereas Linux started life completely independently.

I'd argue that Linux is more Unixy than macOS in terms of philosophy though, I remember reading that Dennis Ritchie considered Linux as much of a Unix as the other offerings on the market at the time which were the BSDs and commercial Unix variants; his opinion is obviously worth thousands of mine!


macOS is also Unix certified by The Open Group, along with HP-UX and AIX.

https://www.opengroup.org/openbrand/register/


There is a Linux distro in that list, too: Huawei's EulerOS, based on CentOS.


Brains have recently become underrated. The way mcfly works is cool, but it offloads your habits to some “neural network” that you can’t control. There are not many methods slower than looking at unpredictable line-by-line output, and almost nothing more anxiety-inducing than the possibility of a false choice made without thinking. This tool basically replaces a deterministic mismatch with a non-deterministic one. Your brain is good at predicting the former (because it is a neural network connected to itself) and bad at the latter, because two neural substances rarely agree.

My guess is that the author wanted history-search-{for,back}ward in their .inputrc for the up/down keys but missed that somehow. The default history behavior is undoubtedly tedious. https://unix.stackexchange.com/a/20830
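
For reference, the bindings from that answer go in ~/.inputrc:

    "\e[A": history-search-backward
    "\e[B": history-search-forward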

The only thing I’d do to history-search- is to show a list of variants (like in wildmode=list:longest or similar) instead of presenting them one by one. Pretty sure it is already possible.


>history-search-{for,back}ward in their .inputrc for up-down keys

Once I found this, I rarely use Ctrl+R since most of the time I'm searching based on starting command name.

If I'm using something a lot, it gets added as an alias/function. I also maintain a file with commands that I think may be useful later; it's easier to search that than to search online.


> a file with commands

Yes! I have an OBTF (one big text file) with all the “devops/etc” stuff that's needed maybe once a quarter and hard to remember, with headers and explanations, and everything in a select-and-paste-ready format.


Author here, thanks for reading. I totally agree that non-determinism is tricky. The big thing that got me to try McFly was that the suggestions are path-specific. I hear fish shell has this feature built in, but I haven't tried it out.

I may be unique in this regard but the types of actions I do in one path are very different from those I do in others: working on my blog is very different from the types of things I do when working on code and it was nice to have history aware of this.

I haven't actually used McFly a lot though, so I haven't quite seen if it's going to become a permanent thing for me. The UI of using FZF for history is nicer, I just didn't like the suggested matches.

You are right though, I don't have my arrow keys setup like this. I'll try that out as well. Thanks for the tip!


fzf has a script you can source in your shell that binds to the usual keys and provides much more ergonomic history search
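
For example, with a git checkout of fzf (the path is the installer's default, but may differ):

    # in ~/.bashrc: Ctrl-R fuzzy history, Ctrl-T file picker, Alt-C cd
    source ~/.fzf/shell/key-bindings.bash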


I get a lot of mileage out of

    alias h="history | fzy"
    alias f="fd | fzy"


`fzf` can be combined with `find` (or `fd`) and then you can just press Ctrl-T and have a fuzzy searching prompt for all files in and under the current directory. It's super convenient.
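
One common way to wire that up, assuming fd is installed:

    # make Ctrl-T list files via fd, which respects .gitignore
    export FZF_CTRL_T_COMMAND='fd --type f'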


This is productivity porn for programmers. While obviously these tools are deep and powerful, there is also a learning curve, and experimenting with whether you can adapt your workflow to them (or vice versa). Next year we will have another 5 shiny command-line tools, and so on.

My point being: only replace your weapons of choice once you are confident you are using them to their full potential and you identify needs that these known tools won't solve.


Some of these tools are pretty easy to use. Entr is amazing and can be incorporated into a workflow very easily

Same with fzf. On a Mac, copying specific output from a command becomes so easy. For example `git branch | fzf | pbcopy`.


Your main point doesn't do alternative tools justice. It is totally fine to use a newer tool as a novice so you are learning something that will serve you better in the long run, even if you aren't an expert in some other tool from the start


> Your main point doesn't do alternative tools justice. It is totally fine to use a newer tool as a novice so you are learning something that will serve you better in the long run, even if you aren't an expert in some other tool from the start

If you are a novice, you don't start with alternative tools, because they are much more likely to lose popularity, stop being developed, and lose relevance.


I've historically spent all my days SSH'd into a remote server in a screen/tmux session writing code in vim and doing various admin/db tasks.

Recently I started using vscode with the remote-ssh plugin and it has surprisingly been a pleasant experience. I have an integrated terminal to the remote box and I have full intellisense for coding (I do 80% coding / 20% admin, probably).

It took a gestalt flip for me to realize that with this plugin installed, vscode is basically a very (very) fancy terminal emulator.

The only problems I've encountered so far: if you paste large strings or quickly scroll through your bash history, characters sometimes drop from the strings. Also, sometimes you have to reload the window because intellisense stops working - it seems the language server installed on the remote box stops running.


Coming from remote development with Emacs TRAMP, vscode + remote-ssh mercifully removed the cognitive load of dealing with Emacs. Not only do you not have to manually install & configure everything, vscode will tell you what you need and give you a working installation automatically.


.. and attach to your organizational affiliation, github activity and checking account for billing, too.. all in one easy install.

Think before you downvote - is this incorrect, really?


I used to keep iTerm windows and tabs open but shifted to using PyCharm's integrated terminal.

I use multiple spaces in macOS, with entirely different project contexts. It helped having the labeled terminal tabs coupled to a PyCharm project window.

However, this also led me to realize that the git commands I was using could be accomplished through PyCharm's interactive source-code-management tools.

This helped me realize that there were more powerful things I could do there than in a terminal.

I still find myself using the terminal to debug remote environments. But for building, the IDE has taken over a lot of my past terminal use.


I think productive programmers would get way further by learning powerful scripting and one-liners than by picking different pre-existing binaries. It's a little old-school, but the pipe is the most powerful concept on the command line, and it's wildly productive.


Though I agree that new is not necessarily better, and programmers should absolutely try to scratch their own itch with the existing tools (bash, sed, awk, and the like), some new tools are a great improvement and should be recognized as such when due.

FZF, for instance, is a major novelty in terms of interface, and it really follows the UNIX philosophy and shows immense composability.


FZF has indeed been a life changer for me: I loved ctrl+r, but fzf enhances it so much, and without much extra/non-desired functions.


I think fzf is still a useful recommendation (better fuzzy matching for Ctrl-R). Or just Ctrl-R for reverse search in bash/zsh is a good thing to know.


Yes, I've found that the most significant improvements for me came with bash aliases, a few little scripts, and nnn for anything that has to do with folder navigation or file browsing. Its search-based navigation speed is unmatched.

Another big upgrade was dmenu. Just slap it into some bash script and now it's interactive. Combined with xbindkeys you can even bind those menus to keys, and it'll work on any Xorg-based desktop.
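
A minimal sketch of that idea (the targets are hypothetical):

    # pick an action with dmenu, then run it
    choice=$(printf 'build\ntest\ndeploy\n' | dmenu -l 3) && make "$choice"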


Pipe is not old-school. Yeah, 20+ years ago I learned that feature - and it's much older. But! Daily since then I pipe to grep, sed, awk, cut, and occasionally perl, php, ruby - and now we have JavaScript CLI tools, wow!

Pipe is the new-cool! (always_was.jpg)


Something that I use often:

There are those edge cases where I want to edit and get something out of a Unix pipe one-liner I made, but in certain scenarios I find myself having to write complicated code for just one or two line edits to the piped output.

For those I recommend vipe (part of moreutils). It lets you put your favourite text editor in between your Unix pipes, so you can edit things manually from the stdout midway through a pipe and let the rest of the pipe take it from there.

It's quite handy; this way I can sometimes use vim's macro functionality inside my pipe one-liners when it's not worth coding it with sed, cut, awk, or grep.
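
For example (the glob is illustrative):

    # review and hand-edit the file list in $EDITOR before deleting anything
    find . -name '*.log' | vipe | xargs rm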


> However, if the directory has many files or sub-directories, tree becomes much less helpful: you only see the last screen full of information as files scroll past you.

  tree | less


Recently I’ve taken to using nnn for anything like this: https://github.com/jarun/nnn - I never used CLI file managers before, but now it’s up there with my shell, vim & tmux as a core productivity tool.

Show me files in this dir:

    $ n
Show/hide details for each:

    .
Filter / search file list:

    /
Navigate subdirs (or parents) with vim or arrow keys.

Cd into sub dirs by typing unique chars from dir names:

    ctrl-n
Drop back to the shell I started n from, but cd into this new dir:

    ctrl-g
Exit n without cd’ing to the new dir I navigated to:

    q
https://github.com/craigjperry2/dotfiles/blob/main/dotfiles/...

Although for deeply nested targets where I already know what I’m after, I’ll just:

    vim<ctrl+t>
To launch an fzf fuzzy finder on the (.gitignore-aware) results of an “fd” (a cross-platform alternative to GNU and BSD find).


If you’re already in vim, why not use whatever equivalent to NERDTree is popular? Unless you’re only interested in the structure and names, and not the file contents?

It seems like they even have a neovim plugin, you might consider using it.


The integration is pretty decent in vim; I have it configured to open a window overlay on <leader>n (requires neovim): https://github.com/craigjperry2/dotfiles/blob/main/dotfiles/...

That said, I don't find myself using that as much. Usually I'm in the shell when I invoke nnn - though I might open a file in vim from nnn.

In vim, I typically lean on fzf.vim more often - usually I know something about the next file I want to open, so it just feels more direct.


I very much like nnn because it's lightweight, but does it have autojump integration[0]? That's the single reason I am still using ranger.

Edit: That and the speed. And since I am more proficient with Python, hacking on Ranger is easier.

0: https://github.com/fdw/ranger-autojump


Yeah, it comes in the base distribution: https://github.com/jarun/nnn/blob/master/plugins/autojump

But if you're happy with ranger, I'm not sure it's worth the switch - they're very similar. nnn is quite a bit faster than ranger, but other than that, I think ranger has more community support.


Thanks for reading the article. broot does more than just piping tree to less: it gives you a TUI version of tree that you can navigate around. The fuzzy finder in it works like FZF, but it keeps everything in a nice compact tree view. It might be overkill, but I like it.


There'll always be multiple ways to skin the proverbial cat.

Shameless plug, but I've written my own $SHELL called `murex`, as I kept running into pain points with Bash as a DevOps engineer. The shell doesn't have `tree` built in, but it does have FZF-like navigation built in.

https://github.com/lmorg/murex

I've been using it as my primary shell for a few years now, and while I'm not going to pretend it isn't beta, it does work. However, it's not POSIX, and some of the design decisions might rub people the wrong way (given how opinionated people's workflows are). But if you're curious, check it out.


I would suggest https://rubygems.org/gems/ruby-each-line-2

e.g.

    ps -A | grep ruby | grep fsevent | ruby-each-line "puts l.split.first" | xargs kill -9
    cat c | ruby-each-line "puts l.split.first[0..-2]" >> .env.development
    pbpaste | ruby-each-line "puts l.split.last" | ruby-all-lines "puts lines.map(&:strip).join(',')"


I don't want to be That Guy who says his way is best but I can only really see this being useful for someone who knows no shell tools at all.

> ps -A | grep ruby | grep fsevent | ruby-each-line "puts l.split.first" | xargs kill -9

  killall -9 -r ruby.\*fsevent
> cat c | ruby-each-line "puts l.split.first[0..-2]" >> .env.development

  awk '{ print(substr($1, 1, length($1) - 1)) }' c  >> .env.development
or

  awk '{ print $1 }' c | rev | cut -c2- | rev >> .env.development
> pbpaste | ruby-each-line "puts l.split.last" | ruby-all-lines "puts lines.map(&:strip).join(',')"

  pbpaste | awk '{ print $NF }' | paste -sd, -
(I couldn't work out exactly what the last two did so I'm not sure my translation is accurate. Even if it's not quite right I doubt the intent would be impossible to express with basic tools.)


That first pkill/killall example is a bit silly, but for the others I can see how a "ruby-each-line" can be useful, especially if you're already familiar with Ruby (although I'd probably name it "el", as that's much less typing). I never really learned awk properly myself either, in spite of being quite familiar with most other shell tools. "ruby-each-line" does more or less the same thing as awk.

After all, there's just one person using your shell: you. So whatever works well for you is a "good" solution.


Yeah, like I said: I can see how it could be useful for someone who knows only Ruby.

> I never really learned awk properly myself either

It's really worth it. It's simple — pattern matches and blocks, BEGIN/END, hashes, match() and gsub() cover 90% of my uses. I very often find myself writing something like this:

  awk -v id=$id '
    $1 == id {
      go = 1
    }
    {
      if (go && $4 == "b667226") {
        km += $5
        secs += $6
        n++
      }
    }
    END {
      printf("%d rides, %d km (approx %.1f hours)\n", n, km / 1000, secs / 3600)
    }'
This program takes STDIN, reads each line, and:

* sets a flag indicating interesting data has been found if the first column matches an ID passed on the command line

* if the flag is set and column 4 contains "b667226", adds columns 5 and 6 to running totals, and adds 1 to a count of matching lines

* after all the input has been read, prints out a summary of the data

Of course, any language can do something like this, but awk is succinct, easy to iterate on, and available almost everywhere.


awk just always seemed a little too domain-specific to really invest time in. The number of times I think "gee, I really wish I knew better awk" is small.

This week I've been doing some Lua programming; I had done some Lua several years ago for something else, but found that I had forgotten most things. Even for Ruby I've forgotten quite a lot, yet for two years I programmed Ruby every day for a living. It's just that I haven't done much Ruby since, and when I did some Ruby several months ago I had to look up quite a lot of basic syntax things because I had roughly remembered how it worked but not enough to actually get stuff done in it.

In general, I find that effective practical programming skills are a bit of a "use it or lose it" thing. I don't think I'll use awk enough to not "lose it", even though it's a fairly small language, and most problems it solves can also be solved in other ways.


Fair enough. My job involved a lot of text crunching and simple ad-hoc analysis that suited awk well. “Use it or lose it” is certainly true.


Fzf is one of those tools that instantly makes your shell feel magical to use.

I highly recommend it, and I consider it by far the most useful in this list!


I am sorry for spamming this comment, but I wanted to share it with all the fzf users because I found it so game-changing:

I use "ESC-c" a lot. It calls fzf and fd, you type a part of your directory name, hit enter, and it cd's to that directory.

    export FZF_ALT_C_COMMAND='fd --type directory'
That's in my .zshenv.

It's incredibly useful if you have multi-layered project directories. Like ~/Projects/Terraform/Cool_Project - you'd hit esc-c, type cool, hit enter and you're in the place you wanted to be.


For his use case of funky I use ~/.bash_aliases; that way I can easily share aliases between machines. E.g., to quickly add a new one: alias als="nvim $HOME/.bash_aliases && source $HOME/.bash_aliases"

Be careful with quoting, though; I got myself into a situation where every new terminal asked for the root password. Shellcheck found the reason quickly.

(edit) Another benefit is that those aliases abstract over differences of invocation on different distros. Here are some examples:

# PACKAGE MANAGEMENT

alias pcl="sudo zypper cc && sudo zypper purge-kernels"

alias pin="sudo zypper install -y"

alias pla="zypper search"

alias pli="zypper packages --installed-only | rg "

alias plu="zypper list-updates && zypper list-patches"

alias pre="sudo zypper remove"

alias pup="far too long" # upgrade all packages and ask for confirmation

alias pups="far too long + shutdown" # upgrade all packages without confirmation && shut down

# QUICK EDITS

alias als="nvim ~/.alias && source ~/.alias"

alias brc="nvim ~/.bashrc && source ~/.bashrc"

alias egr="sudo nvim /etc/default/grub && sudo update-grub"

alias fst="sudo nvim /etc/fstab"

alias pro="nvim ~/.profile && source ~/.profile"

# QUICK MOVEMENTS

alias ..="cd .."

alias ...="cd ../.."

alias ....="cd ../../.."

alias sc="shellcheck"

# SYSTEMD

alias sden="sudo systemctl enable --now"

alias sdls="sudo systemctl --type=service"

alias sds="sudo systemctl start"

# VIM

alias nrc="cd ~/.config/nvim/ && nvim init.lua"

alias nv="nvim"

alias scratch="nvim scratch.txt"


Instead of creating aliases for .., I use this:

  function ..() { 
    for i in $(seq 1 $1); do cd ..; done 
  }
which enables

  ..    # up 1 dir
  .. 2  # up 2 dirs
  .. 42 # up 42 dirs
Here are some more: https://pilabor.com/blog/2021/03/unix-shell-tricks/


With Zsh, I am using:

    # Expand ... to ../..
    function vbe-expand-dot-to-parent-directory-path() {
      case $LBUFFER in
        (./..|* ./..) LBUFFER+='.' ;; # In Go: "go list ./..."
        (..|*[ /=]..) LBUFFER+='/..' ;;
        (*) LBUFFER+='.' ;;
      esac
    }
    zle -N vbe-expand-dot-to-parent-directory-path
    bindkey "." vbe-expand-dot-to-parent-directory-path
    bindkey -M isearch "." self-insert


Wow, this is complex... but interesting, thank you :-) I use zsh, too.


This might be a better alternate to preserve functionality of `cd -`

  function ..() {
    cd $(printf '../%.0s' $(seq 1 $1))
  }


Mmh, what do you mean by "preserve"? `cd -` works like a charm for me.


"cd -" changes to the previous working directory. Your function calls cd multiple times, so calling "cd -" won't bring the user to the previous working directory they expect. ozym4nd145's function only calls cd once.


Aah, now I see. Awesome, I'll try that out. Thank you.


I made a slightly different thing:

If you are on

  /home/user/projects/app/src/handlers/
and you want to go to `app/`, you'll do:

  > up src
Or give name of any other child folder/file of `app/`. So if `.git` exists in `app/`,

  > up .git
would work too.

I made it for Fish shell: https://gist.github.com/ajitid/81a4993be410586c038f8b3fc140b...


Clever. Never seen that idea before.


That's amazing.


I used to alias .. etc., but then found that Fish shell has helpful commands and behaviours for directory navigation built in:

> Note that the shell will attempt to change directory without requiring cd if the name of a directory is provided (starting with ., / or ~, or ending with /).

> https://fishshell.com/docs/current/cmds/cd.html

So these all work without the cd by default with Fish:

../

../../

~

~/path/to/folder

cdh is also very useful. It shows a history of recent directories with a keyed prompt to jump to a directory.

https://fishshell.com/docs/current/cmds/cdh.html


I’m also a heavy user of aliases and can’t see how I’d use funky. Behavior dependent on the current directory is rarely a good idea, and when it is, you can just create an explicit shell script, ./script.


Funny, it never occurred to me to make shortcuts out of ..; I like that.


You can put sudo in an alias!? [mindblown.gif]


The funky tool sounds like a potential security issue, especially if you clone repos and then enter them without checking the contents first. I suppose it could be made safer by requiring review when adding a new directory or when the settings for a directory change.


Earthly itself looks pretty useful too: https://earthly.dev/


I'd love a runonce tool: run a script or tool just once and forbid further executions. Useful for ~/.xsession and such (a sketch of one is below, after the list). Also, stuff you'd like:

-sox, audio swiss knife

-ffmpeg, ditto for video

-ncdu, check disk usage

-rclone, mount and rsync everything

-f3, check and forbid overwriting those fake USB drives

-udfclient UDF, but better, as it eases compatibility. Ditch FAT32/NTFS for computer media sharing

-trickle For network/web programmers, it can force really slow connections on software, such as 2G/ISDN speeds and even below. Perfect to test bad conditions

-pdftotext -layout It dumps a whole PDF into a text file. Useful for copy/paste or adapting the format to anything else - unzip -qcp "$EPUB_BOOK" "*ml" "*.htm" | lynx -force-html -dump -stdin -nolist > book.txt EPUB to utf8 dumper.

Trivially adaptable to be used with less, for example. Use "$1" instead of "$EPUB_BOOK" to use it in a script.
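
As for runonce: nothing official, but a minimal stamp-file sketch could look like this (all names made up):

    # run a command once per machine; later calls are no-ops
    runonce() {
      local stamp="$HOME/.cache/runonce/$(printf '%s' "$*" | md5sum | cut -d' ' -f1)"
      [ -e "$stamp" ] && return 0
      mkdir -p "$HOME/.cache/runonce"
      "$@" && touch "$stamp"
    }
    runonce xrdb -merge ~/.Xresources   # e.g. in ~/.xsession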


I prefer the "git wip" custom command over gitupdate:

  git config --global alias.wip '!git add --all; git commit -a -m "wip: update"'
  git config --global alias.wtc '!git add --all; git commit -a -m "[WTC] $(curl -s whatthecommit.com/index.txt)"'
I don't like the idea of generating the list of modified files in commit messages; it's not very readable to me, plus I could just generate such info with git log. I tried to write a custom git-wip script to include the output of "git status --porcelain=v1", but it turns out it's just not necessary, since "git log --name-status" can already show it:

  git log --name-status
  # or get modified history of a specified file
  git log -p path/to/file.ext


Some interesting tips on this list (and in the comments). It’s definitely worth taking a step back every now and again and investing a little in picking up some newer tools for better workflows.

I recently switched to the latest neovim and redid my config from scratch using more modern plugins. Took a while to understand the ecosystem and figure out how to get things working together nicely. But now, between telescope, ripgrep, trouble and better overall LSP etc, I’ve gone from fast to faster.

The balance of payoff probably still isn’t there yet, but it’s made me want to code more.


I personally love `rg` as a handy grep replacement, and `jq` as I often have to format/prettyprint JSON.


Mcfly - history search

Fish shell has this built in. The history search (by pressing up) depends on your current directory. It's very useful.


If you are a Linux neophyte like me, Midnight Commander[0] can be very useful for learning directory structure and basic commands. After starting it (`mc`), you can quickly toggle its terminal UI on and off with `ctrl-o`.

0. https://midnight-commander.org/


Also, about mitmproxy: it opens up a whole new world of possibilities.

Basically, you can change anything about any request sent or response received.


I really wish someone made an in-depth tutorial for nnn. I have the feeling I am only just scratching the surface using it.


One of the most productive tools I have is being able to search command history for commands typed in the current directory. It becomes a knowledge database. I rolled my own, but I haven't seen any tool doing it, which surprises me. Perhaps the McFly tool mentioned here does.


All of this stuff is readily available in Emacs. Notably, helm, magit, and dired.


Since no one mentioned it, hstr[1] is a game changer for me.

[1]: https://github.com/dvorka/hstr


I use "ESC-c" a lot. It calls fzf and fd, you type a part of your directory name, hit enter, and it cd's to that directory.

    export FZF_ALT_C_COMMAND='fd --type directory'
That's in my .zshenv.

It's incredibly useful if you have multi-layered project directories. Like ~/Projects/Terraform/Cool_Project - you'd hit esc-c, type cool, hit enter and you're in the place you wanted to be.


6 tools:

- broot - like tree

- funky - like direnv

- fzf - fuzzy finder

- mcfly - shell history search

- zoxide - like cd

- gitupdate - auto commit and push

The author references this discussion https://lobste.rs/s/yfgwjr/what_interesting_command_line_too...


I just read the description of funky, and it is like the Windows 98 CD-ROM autorun.exe, but the next level. If you clone my repo and enter the directory, I own your system. Funky indeed.


I'd also add ncdu - which is really great when you need to easily see what's eating up disk space.

jq is also super nice when you have to parse json data.

I also like powerlevel10k: https://github.com/romkatv/powerlevel10k


If you like/use jq and work with HTML, yq is its counterpart. Dyff is in a similar realm.


Do you have any experience with pup to compare with yq? That's the jq look-alike for HTML I first stumbled upon.


Or my Xidel, to work with HTML and JSON via XPath.


powerlevel10k looks interesting with that built-in configuration wizard. I use https://starship.rs

You have to configure it yourself, but it works on bash, fish and zsh.


jq is great for a quick pretty print of json too, just

<run some command that outputs json> | jq .

and you'll get nice color pretty printed output. 90% of the time I use jq it's just to pretty print like this.


somecommand | python -m json.tool

installed everywhere


I think https://github.com/dundee/gdu is faster than ncdu


Off topic hijack - I'm trying to find a tool I feel was probably mentioned here at one point - a REPL for shell.

If I recall correctly, you would open up the repl and start typing a command, and the output would refresh as you typed. Does this ring a bell for anyone?



Yes! Thank you!


One tool that I don't think gets enough love is the "git-fixup" add-on. It looks at your commit log and recommends relevant commits to fix up based on your staged files.

If you are using fixup commits in your PR review process it's essential.


The Silver Searcher is a must-have.


I think ripgrep is better, in most or maybe all ways?


It is definitely quite a bit faster.


I use ack


The one CLI tool that saves me the most time is "gh", and its predecessor "hub". Not having to interact with a website to submit a PR, issue, or just fork a repo is a blessing. I wish more sites would ship a canonical tool to use their APIs...


Re: forking a repo.

How do you type name into the terminal?


I am not sure I understand your question. Can you rephrase?

Forking with gh is pretty simple:

Clone the repo, cd to it, "gh repo fork"



