Also while inside less: & shows you only lines which match a pattern. You can hit & multiple times and less will show you only those lines which match every input pattern. A ^N right after & negates the pattern. & respects the -I switch (case insensitive pattern matching).
I use this all the time, especially when I'm on a machine that I don't want to bother installing something like fzf on.
Seems to depend on your particular less, see the last sentence of this paragraph from `man less` on my machine:
&pattern
Display only lines which match the pattern; lines which do not
match the pattern are not displayed. If pattern is empty (if
you type & immediately followed by ENTER), any filtering is
turned off, and all lines are displayed. While filtering is in
effect, an ampersand is displayed at the beginning of the
prompt, as a reminder that some lines in the file may be
hidden. Multiple & commands may be entered, in which case only
lines which match all of the patterns will be displayed.
Indeed, I thought it would behave like you describe... when I was refreshing my memory of how the negative pattern filtering worked, I first did &/pattern and then &/^Npattern, and was surprised to see that it displayed zero matching lines.
this is my version of less:
% less --version
less 581.2 (POSIX regular expressions)
Copyright (C) 1984-2021 Mark Nudelman
That looks awesome. Something I'd love help with, I use pdsh to tail logs from multiple servers at once, but the ways I can manipulate the logs feel really limited because of how pdsh works. Does anyone know of a better solution for that? Like, from a head node, aggregate/tail the contents of the same log file on multiple servers. Bonus points if it uses 'genders' too to get the list of servers.
As for integrating with a “genders” host DB, there’s no direct support for it. But, lnav is scriptable, so I’m pretty sure it’s possible to write a script that does what you want. I can help with that if you post in the GitHub discussion: https://github.com/tstack/lnav/discussions
I've been using tmux for finding, but I think this might be brilliant and stupid simple. Thank you for this! Never thought of piping "live" output or streaming to fzf!!
tail -f will hang around and output new lines that appear in the file.
fzf will receive that input and, in this case, accumulate the new lines that have come in from tail -f. It will then stay around, present an interactive full-screen text UI, and let you interactively filter the set of received lines based on substrings you enter at the fzf prompt, all while continuously incorporating new log lines coming from the pipe, from tail -f. The +s just says not to sort the lines, keeping the matches in the same order they appeared in the logs.
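For reference, the pipeline being discussed is just something like:

tail -f /var/log/app.log | fzf +s    # the log path here is only an example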
It just takes the last ten lines of a log file, then passes that input to a program that basically lets you search each line. I don't really see the usefulness.
Edit: Ah, I see. According to another comment you see the output of the file as it changes.
> automatically have this for any command that outputs more than 10 lines?
Probably not. The output of a command typically goes directly to the terminal and does not pass through the shell, so the shell has no idea how many lines there are.
You could write a shell where that's not the case, but that would have issues with interactive things - what happens if you run e.g. vim or htop in that context?
You can pipe to `less -F` (`--quit-if-one-screen`), but note that the version of `less` shipped with macOS has a bug and might just swallow the output instead.
Maybe with some clever use of zsh hooks? Here's a gist for zbell; it uses `preexec` and `precmd` to run something right before a command is executed (`preexec`) and right before the prompt is shown again (`precmd`)
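For anyone unfamiliar with those hooks, a minimal sketch of the mechanism (not the zbell gist itself, just an illustration of preexec/precmd):

zmodload zsh/datetime
preexec() { _cmd_started=$EPOCHSECONDS }        # runs just before a command executes
precmd() {                                      # runs just before the next prompt
  (( ${+_cmd_started} )) || return
  local elapsed=$(( EPOCHSECONDS - _cmd_started ))
  (( elapsed > 10 )) && print "last command took ${elapsed}s"
  unset _cmd_started
}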
Then if you run "git fza" you'll get a list of changed files in your repo, which you can use TAB to select/deselect. Hit enter and the selected files will be staged, ready for a commit. Extra cool is that it works from any subdirectory of your repo because it always lists files from the root of the repository. It's really useful for selectively adding lots of files from the command line.
I then alias "ga" to "git fza" for even less typing :)
It pipes a list of available git branches into fzf, which lets you filter them by fuzzy string match. When you hit enter, it will take the best matching one and switch to that branch
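Not the poster's exact snippet, but the general shape is something like:

# hypothetical reconstruction of that kind of branch switcher
git checkout "$(git branch --format='%(refname:short)' | fzf)"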
I really like the approach of pipe-rename (https://github.com/marcusbuffett/pipe-rename) for renaming many files. It's especially convenient for people who can efficiently edit many lines in their favorite text editor (so everybody here, I guess).
Main problem: you maybe don't do this kind of operation frequently enough to remember how it's called or how you aliased it.
Given that I'm guaranteed to forget that I have this thing installed if I would install it, I'm just going to steal the idea and try to remember to use my editor instead of an ugly shell loop next time I want to rename many files. There's only a small difference, really, between transforming `1.txt 2.txt 3.txt` to `file1.md file2.md file3.md` and transforming to `mv 1.txt file1.md; mv 2.txt file2.md; mv 3.txt file3.md`.
You likely already know this, but just a reminder/tip that bash and zsh will let you edit the current command you’re entering in your editor of choice if you press ctrl+x ctrl+e (without releasing ctrl between). (Note if you’re using zsh without ohmyzsh, you’ll have to enable this manually [1])
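For reference, enabling it manually in plain zsh is roughly this in ~/.zshrc:

autoload -Uz edit-command-line
zle -N edit-command-line
bindkey '^X^E' edit-command-line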
Some time ago I have in fact written a small utility based on fzf that solves this problem by letting you comment aliases in your shell config file and fuzzy-search through them:
I like that idea of converting metadata to text, so you can do text operations on it that you are already familiar with, and then apply the changes back.
If your problem can be solved by just regexps, there's a "rename" tool installed by Perl I've used for decades. E.g.
rename -n 's/JPEG$/jpg/' *JPEG
Will apply the substitution for JPEG at end of the filename to just jpg. The -n will run without renaming so you can see what it intends to do.
Note that the rename tool bundled with some distributions is not the Perl version. If someone is looking to install it, it may also be called perl-rename.
I think folks should be aware of 'vimv' which opens your pwd in an editor and you just edit your filenames in 'vi' ... and all of the renaming happens upon saving and quitting.
Not sure if this is the official/canonical distribution, but for what it's worth:
Everything that needs manual entry can benefit greatly from text editor features: search and replace, multiple cursors, duplication, formatting, etc. However, it takes some fancy way to put the result back into the input sources.
To expand on this: if you look at a dir in Emacs and then change the buffer to be writable, you can change file names and saving the buffer renames the files. You can also edit the permissions columns in a similar way. And there is vc integration, so e.g. Emacs will hg mv files in a Mercurial repo.
$ ls / find # tweak until I have a list of things to rename
$ for file in $(ls / find...); do echo "mv $file $(echo $file | sed ... )"; done # this prints mv commands for renaming, I can inspect them as needed
$ for file in $(); do...; done | sh # execute the renaming
It is inspectable, easy to cancel (just don't pipe to shell), incremental... just needs wrangling with bash etc. which is a bit of a pain.
It works for a bunch of other all-or-nothing commands, which are potentially quite destructive; use a for-loop to generate the commands, tweak until you're happy with it, then pipe to shell.
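As a concrete instance of that pattern (extensions assumed, and assuming no quotes in the filenames), renaming *.JPEG to *.jpg:

for file in *.JPEG; do echo "mv '$file' '${file%.JPEG}.jpg'"; done        # inspect the generated commands
for file in *.JPEG; do echo "mv '$file' '${file%.JPEG}.jpg'"; done | sh   # then pipe to sh to execute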
That's the point, you don't need to remember anything, it would come naturally after a day of tinkering with PS/pwsh.
You just write it for your current needs.
This example was just a quick one-liner, it's very simple:
ls | % {            # get the file list, cycle for each item
  rename-item       # obvious usage: rename-item oldname newname
    $_              # pretty obvious, the source - place the loop variable here ie file name
    -newname        # explicit calling for a parameter name, isn't really needed
    $(              # subroutine start or eval if you prefer, to construct the new file name:
                    # 'text' + (take property basename from the file and regex it) + 'text'
      'myfile_' + ($_.BaseName -replace '\D') + '.md'
    )               # subroutine end
}                   # cycle end.
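Put back together on one line, that's:

ls | % { rename-item $_ -newname $('myfile_' + ($_.BaseName -replace '\D') + '.md') }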
My favourite non-standard tool is pv[0]. In its simplest use case, pv is a replacement for cat that also outputs a progress bar to stderr. You can use it to add a progress bar (or multiple) to just about any pipeline. I love it because I know if the one-liner I wrote is about to finish in a couple minutes or if it will be a while and I should either write something more efficient, or do something else while I'm waiting.
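For example (file names assumed), a progress bar while compressing a large file:

pv big.ndjson | gzip > big.ndjson.gz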
I could really use a better workflow to refine grep matches. Has anyone made a tool that combines grep (regex search) with fzf (multiple positive/negative patterns)? What I really want is something like:
The problem is this loses filenames and context lines in the output. I want to apply several positive and negative regexes, and only at the very end annotate with filenames and context. Anyone have a good workflow for this?
I do things like that inside Emacs with the consult-grep command from the Consult package, combined with the Orderless matching style, Embark to collect the results in a buffer. This has several advantages over the command line:
- Interactivity: the results are updated live as you type, so you catch typos sooner and can tweak the search terms as you go.
- The buffer of search results you collect with Embark is not dead text like it would be in the terminal, instead, each line is a link to the corresponding file taking you to the line that matched.
- Wgrep lets you edit all the matching lines in place!
There's a ripgrep ticket for multiple patterns. No one seems to have come up with a good specification for exactly how it ought to work, so no development has started.
I use skim / `sk` for this kind of task: https://github.com/lotabout/skim . For example, for iterating on jq incantations, I have in my shell rc file:
function jqsk {
  sk --tac --ansi --regex --query . --multi --interactive --cmd '{}' \
    --bind 'enter:select-all+accept,ctrl-y:select-all+execute-silent(for line in {+}; do echo $line; done | pbcopy)+deselect-all' \
    --cmd-history=${HOME}/.sk_history --cmd-history-size=100000 \
    --no-clear-if-empty \
    --cmd-query "cat $1 | jq --raw-output --color-output --exit-status '.'"
}
I should look into whether fzf or any of the other tools in this thread can be coerced into doing the same thing, the above is pretty verbose and arcane, and it doesn't always behave the way I expect.
Not exactly what you’re asking for but similar: I wrote a script called aag, based on ag, that lets you find files that contain multiple matches anywhere in the file. It’s a per-file logical AND of multiple ag searches.
Example: you’re trying to find a file that contains the strings “MIT License” (case insensitive), “npm install”, and a match for “react-.*” somewhere in your home directory:
The excessive dashes are necessary so that it’s possible to pass different options to each separate invocation of ag.
In case it’s not clear why this is useful, normally ag (or grep) searches are linewise. It’s not so easy if you are looking for things that occur on different lines, and possibly in different orders.
Just posting this without trying it out...grep -nH gives you line numbers and full filepath context, and as another commenter said, -ve allows you to string multiple excludes, so grep -inH <search> -ve not_this -ve nor_this -ve nor_that.
Unfortunately I've not put in the effort to learn awk past printing the n-th column of ls (my typical use) - the extra syntax required to properly 'quote' and {} things puts me off.
My personal go-to for intelligently tailing files is lnav (https://github.com/tstack/lnav) tho it has crashed a couple of times for me when applying a bunch of filters, etc.
Is anyone aware of any other comparable shell tool for tailing a set of logs?
Sorry for the crashes, have you sent the crash logs to support@lnav.org or opened an issue on https://github.com/tstack/lnav/issues ? I try to take a look at crash reports, but I’m not always able to figure out the issue from just the logs. If you have some time to spare in replicating the problem, I can take a deeper look.
I do try to submit the logs when I can. tbh, it takes a combination of (In/Out) text filters along with some navigation to trigger the failures. If I re-encounter, I will try to spend some time to reproduce.
You can find the last failure I submitted here[0].
Oh, it runs stuff like curl. I use html2text just about 5 times a day, and want better ways of working with HTML _as_ text without a browser of any kind.
I would broadly categorize these new tools into two categories: 1) those that are adding real value and make things easier, and 2) those that just add bling to existing tools. An example of the former category is ag, an example of the latter is duf. Also, lots of old tools can do more than you think. For example, did you know good old top can display a bar graph of CPU load for each core just like htop? Just press 1 and t after starting top. You can even make the output colored (press Z and enter), although admittedly the default color scheme is not great.
I like tools that come with nice defaults, so that I don't have to learn tricks like that. To make a somewhat bad analogy, one of the reasons that Ubuntu is so popular vs other distributions is that you have to futz with it less.
Which isn't to say that it wasn't neat to learn that top can do that, but I'm probably still going to continue using htop.
How many times have you wanted to dedup a (text) file, but definitely didn't have enough memory to perform the task? I found this one day when I had to dedup a set of .ndjson.gz files which totaled a cumulative 312 GBs. Utilizing the bloomfilter option, I was able to dedup the records without any large investment on my part.
Anyways, runiq[1], "[an] efficient way to filter duplicate lines from input, à la uniq".
It provides several ways to filter, of which I almost always default to the bloom filter implementation (`-f bloom`).
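For the .ndjson.gz case above, that would look something like (paths assumed):

zcat *.ndjson.gz | runiq -f bloom > deduped.ndjson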
Interestingly I was also about to post that autojump was missing, checked the comments, saw yours, and rechecked - sure enough, autojump is in there! So, your comment was useful after all.
Ah, htmlq [1] is a missing one that's not on the list!
Straight from the repo: "Like jq, but for HTML."
I find it useful for quickly hacking scripts together and exploring data. Very useful for the iterative process of finding good CSS selectors with the data that I can get without javascript running.
Not sure if pup supports this but something I do use fairly often (and copied into my own internal tooling) is the ability to filter out results as a flag in the CLI.
For example, something I usually do is:
curl --include --location https://example.com | tee /tmp/example-com.html | htmlq --base https://example.com a --attribute href --remove-nodes 'a[href*="#"],a[href^="javascript"],a[href*="?"]'
This grabs the page, shunts a copy to /tmp for subsequent, iterative testing, then tries to grab all the links while filtering out any links that have a '#', '?', or start with the word 'javascript'. This is super helpful when I'm just exploring some HTML scrape and trying to build a graph of links without having to pop out a proper programming language just yet.
broot completely changed the way I navigate directories on the CLI over the past year.
I was an 'ls' purist before; I've tried various CLI file managers in the past and they all felt like they added too much friction, with the one exception of nnn, which I briefly used before finding broot, which just feels really fluid and natural.
Broot has many features that feel natural... when you know they exist (for example, try looking for what's taking space with `br -w` then stage the files and remove them at the end).
I mostly use zsh's globbing for this these days; the syntax can be a bit intimidating at first, but it's not that hard once you get used to it, and it's very quick and easy to type.
Bash doesn't really have any of this, so if you're a bash user you're out of luck. Don't know about fish.
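For anyone curious, a few examples of the kind of glob qualifiers meant here (illustrative, not from the original comment):

ls **/*.md        # recursive glob
ls *(.)           # plain files only
ls *(om[1,5])     # the five most recently modified entries
ls **/*(.Lm+10)   # regular files larger than 10 MB, recursively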
I'd like a command line tool that could transform `history` into a list of alternate tool recommendations. Bonus if it could also generate recommended aliases and functions according to your habits.
I use pandoc to convert markdown to powerpoint decks, it's a great workflow as you can preview and tweak the content and then apply the firm theme before the presentation.
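Roughly (file and theme names assumed):

pandoc slides.md -o slides.pptx --reference-doc firm-theme.pptx    # markdown in, themed pptx out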
Yeah, I just like the terminal aesthetic. And I've had a problem in the past with sales bros taking internal only presentations that aren't supposed to be consumed externally for a lot of reasons and showing them to customers. Markdown seems to be enough of a barrier to keep that from happening.
Glow is absolutely gorgeous! I wasn't aware of it before reading this article and I love it so much! I write most of my notes as Markdown files and Glow is the best tool to browse through them.
direnv should have been part of unix/linux/posix environments from the start. Whenever something's install steps tell you "go add this export statement to your .bashrc" or similar, it never sat well. That sort of thing should be scoped by path, and shell independent -- exactly the thing direnv enables.
I never understood why people were so enamored with direnv, but this makes me understand it a bit. I don't think I need it right now, but I'll keep it in mind for this use case.
I read through the direnv docs, but I'm trying to think of how I'd use it. Can you or anyone else give me some examples? Even just listing which directories you have .env or .envrc files in, and what environment variables you use in them.
The approach of using direnv and putting .env files in directories encouraged me to pull more variables out of other places (config files, shell scripts, global variables in a programming project) and put them in environment variables instead. It's very aligned with the 12-factor philosophy [1].
Also, many scripts start by just defining a bunch of variables -- and many of those, in terms of other variables. Say, a PROJECT_ROOT directory, and a PROJECT_DOCS directory defined relative to that, etc. Then, a bunch of command parsing logic, after defining all those default values, so the user can set values of their own. Then, finally, the script can start doing the thing it was put there for in the first place.
With the .envrc approach, some of that stuff is pulled out of the script (making it shorter and simpler) and considered part of the directory environment.
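A hypothetical .envrc along those lines (PROJECT_ROOT and PROJECT_DOCS are just the example names from above):

export PROJECT_ROOT=$PWD
export PROJECT_DOCS=$PROJECT_ROOT/docs
PATH_add bin    # direnv stdlib helper: prepend ./bin to PATH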
My biggest use is for configuring database connections. Create something like `FOODATABASE=postgres://user:password@server:port/name` and then, whenever I am inside the project, I source the FOODATABASE environment variable wherever it is needed. Another convenience pattern I use with Django is a PROJECT_IS_DEBUG key: iff the variable is defined, enable extra tracing functionality without requiring any development-specific configuration files.
Example server pattern to default to production:
if "PROJECT_IS_DEBUG" in os.environ:
DEBUG = True
ALLOWED_HOSTS = ["*"]
else:
# production configuration by default
All for a one-time configuration setup. A further boon of this workflow is that systemd natively supports an EnvironmentFile configuration, so you can re-use the same configuration format from development to production.
Projects I work on wind up with a bin/ directory containing any number of little helper scripts and tools. With direnv that dir can automatically be first in your PATH when you're in the project, but not on it at all when you're elsewhere. Ditto for, say, putting a Python project's virtualenv/bin folder on PATH or a NodeJS project's node_modules/.bin.
Projects I'm on also tend to wind up with an etc/ directory that configures things like PATH and tab completions for a good baseline experience - think of it as standardizing and isolating the snippets many projects tell you to put in your ~/.bashrc to work on the project.
Direnv makes it easy to automatically load those, too.
It really did revolutionize the way I work, by making it trivial to make projects much more self-contained, the way I'd always wanted them to be but hadn't been quite sure how to achieve.
I've only used Nix for managing my personal installed package on OS X so far, but I believe direnv works really well in tandem with Nix - use a Nix file to define your project's dependencies and use direnv to automatically activate all those dependencies whenever you're in the project's directory.
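In its simplest form that's just a one-line .envrc (assuming the project has a shell.nix or default.nix):

use nix    # direnv stdlib: load the project's Nix shell environment on cd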
I place a .envrc at the root of our monorepo with a handful of env vars that configure stuff like PATH (to point to the various scripts, tools, and executables that are used in the dev environment), override the sane default config files with an environment variable (extremely useful when chasing down specific bugs) and to redirect or control logging and debug info when running tools and systems.
It makes it a breeze to pull in a commit and set up a development harness that pokes at whatever thing I need to poke at in my local environment. And it does it without changing a line of code or command line invocation, which is a big deal in polyglot environments with various build constraints (not passing -DFLAG=thing is enormous in a big C/C++ code base, for example).
Even just being able to control whether a service is pointed at a dev/production/local service/database is a big deal if you've invested in IaC and don't want to mess with any config files to do your work (as .envrc is probably in your .gitignore).
Could not agree more. I've shilled it three times in the past week alone. And I've also taken to mentioning it in any README involving env vars. Awesome tool.
I've actually moved from direnv to shadowenv[1]. It's more powerful since it's using a Lisp dialect called Shadowlisp that lets you easily do things like append/prepend to $PATH, expand paths and other common actions.
For Linux, look at the eBPF tools - they're really useful[0]. I've used them to troubleshoot annoying problems before, and they're far more practical to use than the usual `strace -f ./program` dance.
Troubleshooting system issues can also be done with SystemTap[1], which looks cool, but I haven't personally tried it yet.
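For instance, a couple of the BCC tools (binary names vary by distro; Ubuntu's bpfcc-tools package appends -bpfcc):

sudo execsnoop-bpfcc            # trace every new process exec'd, system-wide
sudo opensnoop-bpfcc -p 1234    # trace open() calls made by PID 1234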
Perf. Extremely powerful. Mostly for profiling, and also some debugging scenarios. Takes a fair bit of getting used to, and each debugging session needs more setup compared to just a quick strace. When working with containers it’s even more complicated to set up, unfortunately.
Nice list of new tools to take a look at. However I think there's one big drawback: as you move around different systems it's very unlikely that you'll have those tools (you might not even have permission to touch those systems), so you will have to rely on the standard existing tools (and even among different OSes you have slightly different implementations). That's why lately I've been investing more time in reading the man pages of some tools; maybe I'm missing an already existing cool feature. You can learn a ton of things from man pages, and if you follow the "SEE ALSO" section you might even learn some new commands.
I got very excited about this tool; it's exactly what I need. I wrote a custom calendar app with it. But I got disappointed by the performance: a list of 10 items takes 3 seconds to draw. I have yet to check whether that is caused by some other tool in the call stack. I wonder what other people's experience with mlr's performance is?
I also regularly use miller/mlr with files of 1M+ lines and I've never had problems with the hardcoded processing (i.e. "verbs") nor with the DSL language, which is known to be much less efficient compared to verbs.
Both visidata and miller are essential tools to process/view CSV/TSV files, way better than LibreOffice or Excel in terms of performance on large files.
We might be victims of our own success on some of this. We've never had a major security hole in Mosh (after ten years since 1.0). We're really proud of that! But that also means we've never needed to issue a security update, which some people use as a proxy for "are people looking for security holes in this project."
After a few years without an active maintainer, as of a few months ago we now have a group working slowly but actively towards a Mosh 1.4 release. I think the main benefit people are expecting will be support for 24bit color escape sequences, but I'm also hoping we can get some fuzz targets, etc.
pwru (https://github.com/cilium/pwru) is a fun new tool from the Cilium folks for tracing network packets in the kernel. Like tcpdump but you can trace the full path of the packet including kernel syscalls. Lets you debug much deeper than "when the packet gets to this port it gets dropped".
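Usage is roughly like this (the filter expression here is just an example; recent versions accept pcap-filter syntax):

sudo pwru 'host 10.0.0.5 and tcp and port 443'    # trace these packets through the kernel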
I think tig warrants more than just a passing mention.
For me it's the best complement for a command line focused git usage, because it offers a far better (but still streamlined) experience for staging chunks than "git add -i", through its "tig status" TUI interface.
One thing that bothers me a little (though I never bothered to research whether the defaults can be modified) is that it does not use vim keybindings (gf for loading a commit, for example).
Awesome, thanks for that! It has a bunch of useful stuff merged, but it's still missing the export tree view feature (https://github.com/wagoodman/dive/pull/324), which can be handy.
Related to ranger, nowadays I use lf[0], which is a clone of ranger written in Go. There's also pistol[1], which is a replacement for ranger's rifle file viewer.
ranger is fancier (both visually and in terms of configurability), but nnn is significantly faster in my experience. I kept putting off looking into ranger's rifle capabilities and Python scripting, but after having it crash on me a couple times - in a directory with many hundreds of files, to be fair - I tried nnn. The speed and simplicity made me switch over entirely, because it has just the right amount of features that I wanted from ranger. (Well, the one thing I kinda miss is the inline Markdown preview in ranger - shortcut `i` - but it's a minor convenience at best.)
MC only here too. On the Amiga, Directory Opus (DOpus) was amazing. Their Windows port came late and isn't as essential, though I can't imagine how people have the patience to do any file management with the awful stock Explorer. Luckily not a worry of mine, I live in the nix/nux terminals.
bat is certainly a game changer. I never printed things out to the terminal - always did a `less` or a `view` - before I installed bat.
One (very) minor issue I have with it is that when the file is very small - a couple words or lines at most - then the "decorations" it prints around the content get distracting and make the content slightly harder to read. Maybe I should just write a wrapper around it that does a `wc` first and decides whether to do a `bat` or a `cat`.
Every time I see these lists, I tell myself i'll start using these, but always forget.
I would love a meta tool that, every time I use one of the old tools, automatically gives me a notice to use the new one instead, so I can learn. Then again, I guess I could alias the old ones to point to the new ones.
When you hit ctrl+t, you'll get mdfind's output for all files in the current working directory and below, auto-piped into fzf for fuzzy matching. You can do something like type "vim " then hit ctrl+t, find the file, hit enter twice to start editing it. I wrote a quick guide on installing fzf/fd/ripgrep with powerlevel10k/oh-my-zsh/zplug on MacOS, because I would get asked how to replicate my setup: https://gist.github.com/aclarknexient/0ffcb98aa262c585c49d4b...
None of the "replacements" interest me. When I try the "new" ones I cannot get interested in them either. When trying new programs I am looking for whether a program can do something essential that I cannot do myself, e.g., with shell scripts. However I am beginning to think another reason is that these "new" utilities are consistently too complicated, e.g., too many options. It often seems as if the authors are trying to show off their programming skills with some new language they are trying to learn, e.g., Go or Rust. Comparing these new programs to original UNIX utilities, many (most) of the UNIX ones seem comparatively simpler.
I write quick and dirty single-purpose utilities for myself because I want relatively simple programs. I try to keep program size reasonably small so I use C not Go or Rust. I avoid creating options. FWIW, seems like that was true of most of djb's utilities, too. Some of the best "new" UNIX utilities I have seen have come from people who use djb's C functions and/or copy his programming style, including the preference for small program size and few-to-no options. For example, the authors of runit or s6 have some interesting utilities. The author of tinysshd has some useful ones as well. These projects are generally not popular but they are generally high quality, IMHO.
There are some JSON utilities listed on this blog page. Despite so many options for libraries and programs to process JSON, I still cannot find one that does something very simple: 1. extract JSON from HTML, 2. print it in a customised, human-readable, left-justified format that 3. makes it easy to process further with other programs. Hence I wrote a stupid program for myself that extracts JSON, prints the keys and values left-justified, making it easy for me to read with less(1) and to process with traditional UNIX text-processing utilities. (I am probably mistaken but I believe the one I wrote may be able to operate at the same speed regardless of the size of the JSON data. This needs to be tested.)
I have written several shell scripts over the years that have a subset of the functionality of fzf (or a superset of urlview's, which came much earlier), namely finding and selecting items from lists. I am still not a fzf convert because generally it does not do anything essential I cannot already do myself. Plus it is larger, more complex and generally slower.
Someone once said the best interface is no interface. For me, the less required user-interaction (including selecting options), the more powerful the utility.
The article on The Verge looks like it's about mobile phones, i.e., pocket-sized computers meant to be operated with touchscreens. Granted, today we can plug tactile USB keyboards into some of these "phones" and we have options like Termux, but the subject here, unlike the The Verge article, is command line programs.
Maybe no one besides me ever actually said those exact words with respect to command line programs. What did happen is someone wrote that he found user interfaces on command line programs are not "good" interfaces. He then suggested that writing command line programs that did not have to parse options could be a "security" tactic when programming. See "5. Don't parse" in the text file below.
I find the older utilities are more "ergonomically pleasing" and improve "quality of life" given the generally poor quality of today's software. Indeed the "switching cost" would be high, as I would lose the ergonomic pleasure and quality of life improvement, not to mention the level of efficiency and productivity, I get from the "classic" UNIX utilities.
> consistently too complicated, e.g., too many options
At least one of them (fd) was designed specifically to be simpler to use than its UNIX counterpart (find). And others, though just as simple to use (ripgrep, scc), are much more performant than the tools they replace (ack, cloc).
I made a tool back in college called “line” for outputting ranges of line or column numbers.
I got tired of piping head into tail and found it simpler.
Examples:
line file.txt 5 to 9
cat file.txt | line --column 4 to 20
I thought “line” was very Unix sounding and kinda cute, but like a lot of these projects it would never make its way into the GNU utils, so I thought what’s the point. That, and of course to a beginner Awk user, those kinds of operations are child’s play. I thought about CSV and printing lines between matching words, but it’s all about KISS.
I'd much prefer an ecosystem of interlocking small commands with uniform options than using classic UNIX do-everything tools such as awk. These generally have terrible syntax that's impossible to remember. If I'm gonna drop to a sublanguage other than the shell, I'll use something sane like Python.
To me it feels like someone could rethink the unix tools in the way you're talking about, and maybe it could become a successful and widely-adopted project. Because a suite of tools like this could live alongside the old unix stuff.
The main challenge to doing this well is having good taste and experience. It may be that if you really tried to do this, you'd face tons of UX challenges, and you'd find that tools like awk really were in the sweet spot already.
awk is a lot better at what it does than Python though, and much easier too.
Even something as simple as "cmd | awk '{print $1}'" becomes something like this in Python:
import sys, re

for line in sys.stdin.read().split('\n'):
    s = re.split(r'\s+', line.strip())
    if len(s) > 0:
        print(s[0])
    else:
        print()
And I probably got that incorrect as it's probably not the way to read stdin (it's been a while since I programmed Python).
I can list many gripes with Unix tools, but "awk bad" isn't on that list. It's a small language that solves a very specific problem, and does that surprisingly well. The syntax is about as simple as it gets – no idea how it's "terrible" or "impossible to remember", and it's certainly not "insane" as you seem to be suggesting.
It's fun to write your own tools! (Scratch your own itch and all that.)
It sounds like your "line" replicates a use of sed that I use all the time, printing a contiguous range of lines.
The example:
line file.txt 5 to 9
can be:
sed -n '5,9p' file.txt
You mention using awk, which totally works, but to me is much less ergonomic.
You don't explain what line's "--column" option does so I'm not sure what the equivalent of that might be. That might be where awk comes into its own ... :)
> [...] and printing lines between matching words
This is the same sed command as above, but using regular expressions for the address part:
sed -n '/^func doit/,/^}$/p'
will print just the function called "doit" (in properly formatted go).
Could you (I mean "one") design a "friendlier" (or more "beginner friendly") user interface than sed presents? Yes, obviously (you did exactly that). But unlocking the power (or even just beginning to "unlock" the power) of the standard tools (sed, awk, grep, tr, cut, paste, find, xargs, ...) can get you a really long way. Of course, the initial problem is how to know that one of those tools can solve the problem you have in your head.
("Bonus" sed content: replace "head":
Instead of
head -n 5
do
sed 5q
To replace tail you need tac (or "tail -r", haha))
Idk, within the amount of Unix that GNU and BSD accepted as a bare environment, the range runs from single-purpose programs like tail to mini interpreters which are powerful but require a lot of skill. If I worked in system administration and I HAD to use shell, I'd hold onto Awk for dear life. But I don't, so it's sort of this ancient swiss army knife.
You can use Nix (https://nixos.org/nix) to install these tools on any Mac/Linux machine (nix profile install nixpkgs#{bat,delta}). Disclaimer: I packaged some of these tools for Nix :)
This is brilliant and I'm totally stealing this. I dislike Ansible too much to encode a lot of my workflow in it, but a make file is a much better proposition.
I use spruce for many things, but its ability to merge yaml files smartly is very useful. Think a global yaml merged with one of [prod, staging, dev].yaml, merged with override.yaml, producing a deployment yaml. https://github.com/geofffranks/spruce
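That merge pattern looks something like (file names assumed):

spruce merge global.yaml prod.yaml override.yaml > deployment.yaml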
A little late to the party, but if you ever need to do basically anything with dates, check out dateutils (https://github.com/hroptatyr/dateutils).
Amazing collection of commands.
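A few of the bundled commands, to give a flavour (exact binary names vary; Debian/Ubuntu prefix them with dateutils.):

datediff 2021-01-01 2021-12-25    # days between two dates
dateadd 2021-01-01 +3mo           # add three months to a date
dateseq 2021-01-01 2021-01-07     # print every date in a range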
Semantics, semantics. If you mosh into a system, you wouldn't also ssh into it. In that sense, the use of mosh replaces the use of ssh and mosh would be used in similar scenarios.
The neat thing about mosh is that it doesn't rely on ssh. All you need is a way to launch a process on a remote machine and securely transmit a shared key back to the client. ssh happens to be a useful way to do this, but it's not required.
Curiously, most of those tools implement things Emacs has had for decades. Obligatory #define NOFLAME: I was, and to a certain extent still am, a unix guy, but after having jumped ship to Emacs I started seeing in practice many respects in which unix is inferior to classic systems, far beyond the Unix Haters Handbook.
File renaming? Dired does that and more than many modern tools, not only in mere editing (wdired-mode) but also in selecting what to edit (marking via regexp, narrowing, manually selecting files while "killing" others, etc.), in a far more flexible way than a unix CLI tool piped to an editor. The results are the same of course; the ease of use is on another level.
Narrowing/Fuzzy searching? Similarly from Helm to Counsel passing through consult, ido, ...
#endif // NOFLAME
Anyway, I still use the CLI daily, simply because I'm so habituated to it and for certain things it's quick, and I use some of those tools. But the interesting part is not so much the tools (dangerously aliased, sometimes, to overwrite the original ones, because who remembers their names?) but the trend: in the last decade I've seen a kind of resurgent interest in unix and FLOSS. Many who in the past said "ah, yes, nix is powerful, but I need to work, no time to learn, ..." have in recent years started using GNU/Linux heavily, even inside Windows, and more and more are accustomed to the nix model. Emacs itself seems to have seen a sort of resurgence in popularity, and that makes me think: did we need so much time to learn?
I mean, we had the IT revolution from Xerox. Very few understood its power, and the GAFAM were born out of it, starting with the first modern IBM, in the sense of using (and ruining) Xerox PARC tech to seize its power while still giving something to end users. The unix revolution succeeded, but again for most it's a thing of the past, and its successful model is also its failure: they started by saying that the Xerox desktop model was just too complex and expensive, that people needed cheap and simpler things; the public agreed and unix succeeded. Within a few years they started realizing that no, good iron is needed, GUIs are needed, etc., and the big-iron era succeeded again, but only for a small period of time. The PC era wiped it out with even cheaper and crappier things. The PC era succeeded, and within a moderately short period of time the old desktop model was rediscovered (without most people knowing it): for instance, the trend from widget-based GUIs to document-based ones, the trend toward text-centric work, and the recent interest in classic unix, which is actually the most common living vestige of the classic model. Not really end-user programming, that's Emacs, but at least composability via IPC on the CLI, from IDEs to editors, from DEs to WMs (perhaps tiling ones), and for some, from the unix model to Emacs.
Long story short, I see a trend of "knowledgeable" people veeeery slowly rediscovering classic tech. If that's really a trend, and we (society) really need such a timeframe to learn... well... it's a bit sad...
The Emacs community on Reddit, GH, etc. has grown and grown with people not really from IT; new lawyers, astronomers, ... ask for Emacs, and that for me means a growing interest, probably still too little to move Google Trends, but clearly perceivable. Besides that, real statistics are next to impossible, so I can only tell what I see, and that's not a real statistic anyway...