Become Shell Literate (drewdevault.com)
512 points by als0 6 months ago | hide | past | favorite | 330 comments

I used to think this way as well, but out of necessity, had to work specifically with a commercial IDE for some time. Turns out, if you are confident learning the keybindings of a robust IDE is worthwhile (e.g. you know that you must use it for some particular project for a decent amount of time), the investment pays off just as well as learning shell commands. A good IDE can do everything a cobbled-together shell pipeline can, often in two keystrokes instead of N trial-and-error command inputs because you forgot, yet again, that, e.g., while `grep` takes the pattern argument before a path, `find` does the opposite--in spite of POSIX, there's an incredible lack of UX standardization when it comes to Unix tooling.
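The `grep`/`find` argument-order inconsistency looks like this in practice (the `/tmp/argdemo` tree is just an illustration set up for the demo):

```shell
# Set up a tiny tree so both commands have something to match:
mkdir -p /tmp/argdemo && printf 'TODO: fix me\n' > /tmp/argdemo/a.c

# grep: pattern first, then path(s)
grep -rl 'TODO' /tmp/argdemo

# find: path(s) first, then expressions
find /tmp/argdemo -name '*.c'
```

Both commands print the same file, but the pattern and the path trade places.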

People like to pretend that learning shell commands is somehow better because it's more portable, but unless your work entails working across multiple machines and environments daily, it doesn't matter. Most of us do most of our work on one machine, in one environment, so at the end of the day, it really comes down to preference. Personally, I do prefer working in the shell, but if you prefer working in a modern IDE, don't let anyone fool you that you're somehow wasting time or being less proficient than you could be in a shell--if you took the time to learn the IDE keybindings and the IDE is on par with anything Jetbrains puts out, you're not losing in efficiency at all and possibly making gains when it comes to certain tasks.

> People like to pretend that learning shell commands is somehow better because it's more portable,

No, it's better because if you aren't doing exactly the workflow an IDE or GUI tool designer has envisioned, it is almost invariably much easier to do it in shell (and then make it a script and then bind it to a command in your IDE or GUI tool, if they support that) than to beat the non-shell tool into, first, doing what you want, and then making it easily repeatable.

It's also more portable, which, contrary to your description, is very useful for most people, because even if there are some dev-only tasks that “I can only do it in the IDE” is fine for, for many things you may also want to do in a CI pipeline, on a deployment box, or other places where your IDE isn't running. Shell scripts generally work there.

I use an IDE mainly for code editing and exploration (e.g. jumping from usage to definition), for which I can't see how the shell is easier.

Modern IDEs have shell built-in so when I finish coding, I can just open up a terminal inside IDE to run shell scripts. Effectively I'm getting the best of both worlds.

I never knew that I had to follow some sort of rails laid down by IDE designers, which I think is a very common misconception among people who dislike IDEs.

Shells inside IDEs are an idea that goes back to Xerox PARC workstations; the UNIX folks just have a hard time grasping that, urban myths aside, there are other ways to achieve the same thing.

Even better, because IDE-integrated shells are in many cases graphical REPLs with additional interaction capabilities and graphical abilities.

As for a trip down memory lane, here is one from 1977,


You can get a nice view of how it worked by following the Wikipedia links, or by checking the presentations on Mesa/Cedar--the evolution of XDE as Mesa evolved into Cedar--and on one of the latest versions of XDE as well.

"Emulating a Xerox Star (8010) Information System Running the Xerox Development Environment (XDE) 5.0"


"Eric Bier Demonstrates Cedar"


And naturally the Smalltalk-80 and Interlisp-D environments that preceded it,

"Emulating Smalltalk-80 DV6 on a Xerox 1186 (6085 Daybreak Development Kit)"


"The Interlisp Programming Environment", 1981


The lack of teaching about the computing world outside Bell Labs leads to a UNIX cult, unaware of the progress that was already available in the '70s and '80s but unfortunately failed to catch on for several reasons, so in the end there is this idolization of the UNIX shell.

> Modern IDEs have shell built-in so when I finish coding, I can just open up a terminal inside IDE to run shell scripts

Sure. In GP I talk about the value of knowing the shell when you use an IDE, not the value of knowing shell instead of using an IDE. To get “the best of both worlds”, you have to know how to use the shell.

> I never knew that I have to follow some sort of rails laid by IDE designers

You don't, if you know how to use the shell. Whether the shell is integrated or external to the IDE is a side issue.

You missed the point that it’s about user preference. The shell works better for you, that’s great.

My setup is specifically crafted between my text editor (that has a terminal that I use all the time) and a few GUI tools that I’ve got customized and scripted. I like a mix of shell and GUI. That’s my preference.

You’re right that it isn’t as portable as my dot files repo (and I have one of those too), but I do actually have it set so I can quickly restore the whole config on a Mac and restore the text editor portion from Windows, or really any machine with a web browser.

If you are constantly using different machines on the regular, yeah, being good at shell commands makes sense. But it still comes down to personal preference and people who like other methods aren’t inferior.

"People like to pretend that learning shell commands is somehow better because it's more portable, but unless your work entails working across multiple machines and environments daily, it doesn't matter."

It is more portable. No pretending is necessary. It's true. I run multiple computers with different resource constraints and operating systems. I neither have the patience nor the time (not to mention the system requirements) to install an IDE on all of them. However each one has an OS that comes with a POSIX-like shell, e.g., NetBSD's sh, FreeBSD's sh, OpenBSD's sh and Linux's sh, which is derived from NetBSD's sh.

The author of this blog post begins his demonstration of shell wizardry with the "history" command. This command does not exist in POSIX sh. On NetBSD I use "fc -l 0". Go figure, it is more portable (never mind fewer keystrokes). The scripts I write in NetBSD sh run on FreeBSD, OpenBSD, Linux, and a number of other OSes without any modification. I do not have to learn multiple shells to do work. On BSD, I also use the POSIX-like scripting shell (sh) as the interactive shell.

The best part is I do not have to install anything, it has already been included, no worries about system requirements. All these OS require a POSIX-like sh. None of them require an IDE.

OP didn't say it wasn't more portable--just that it being more portable doesn't always mean it's better. Sounds like for your use case, portability is very important and trumps the benefits of an IDE. That might not be true for someone who doesn't, say, run multiple computers.

"Personally, I do prefer working in the shell, but if you prefer working in a modern IDE, don't let anyone fool you that you're somehow wasting time or being less proficient than you could be in a shell"

I totally agree with that sentiment.

"[...] unless your work entails working across multiple machines and environments daily"

That situation might be a lot more likely than you'd think. I started out doing WordPress, and that's still mostly what makes me money.

But now I am using 30-40 different machines that host 300-400 sites, each with its own install, and knowing how to do stuff efficiently with the shell is super useful.

To me, that situation feels kinda unlikely, but at the same time dang it's nice to just ssh into a machine and know what I'm doing.

I have coworkers and clients who just can't do certain kinds of tasks or troubleshoot stuff with certain kinds of tools because they just don't have the CLI knowledge.

So while I generally agree with your post, I will offer an alternate idea to balance it: if folks are thinking that learning how to use a shell is a waste of time because they can do everything in a GUI, if they took the time to learn the shell to a level covering most of the things they use every day in an IDE, they are "not losing in efficiency at all and possibly making gains when it comes to certain tasks."

Yes, the IDE is supposed to be faster for the things it was designed to do. That's the whole reason for its existence. But the shell allows you to do much more than your IDE was supposed to do. If you don't learn it, you're confining yourself to the boundaries set by the IDE.

And vice versa! The IDE will show me the inheritance hierarchy. It will help me find the places where a method is called (and it knows to skip like-named variables and other methods that happen to have the same name).

The IDE as designed at Xerox PARC, and in most Smalltalk, Common Lisp, Java, and .NET ones, has a shell as part of its feature set; there are no boundaries.

But if it includes a shell, you still need to learn the shell, don't you? It doesn't matter from where the shell was called, a real terminal or an emulation.

If you work with mechanical or electrical engineers, then you're going to run into Windows machines pretty quickly, and then you need to convince them to install MinGW to run your shell scripts. Sure, shell scripts are portable between Mac and *nix, but you're leaving out a large elephant.

Hopefully the rise of WSL, Windows Terminal, and the OSS PowerShell will help with that.

I’ve anecdotally noticed more people using or willing to use PowerShell in some scenarios, now that it’s at parity on Mac and Linux (there are a few things in old-school classic Windows PowerShell that don’t work on Mac/Linux but the future direction and support is all x-platform) and it’s certainly easier to get a Linux shell running in Windows than ever before.

Disclosure: I work at Microsoft but not on any of these tools.

Cross-platform PowerShell is fantastic. It's one of the first things I install on a new Linux box nowadays. I don't think I could go back to a string-based shell. Objects are so much cleaner and easier to manipulate.

Plus you get a huge standard library out of the box with its .NET integration.

It's really impressive what Microsoft has done with it.

I used git bash and it worked just fine.

But now you can use the Windows Subsystem for Linux. I don’t have much experience with this so I’m not sure how well it works in practice, but most people seem to rave about it.

More than once I've typed ls -lah into a command prompt window only to be sorely disappointed.

PowerShell has an ls alias.

Depending on your needs, the Windows port of Busybox might do the trick.

Personally, I don't like the shell that much. As others pointed out, it is a consistency-lacking set of programs that often do only trivial tasks (by design).

I prefer a proper cross-platform scripting language such as Perl, Python, or Newlisp. The latter is very much like BusyBox: it packs a lot into a small single executable (plus it is a Lisp-like language, so the syntax is simple and the documentation is good).

Ironically, two years ago they would have been running Unix workstations, and due to one or two software vendors they have been forced onto Windows.

Architects can also use Macs only part of the time. A lot of software just isn't available for anything other than Windows (Rhino, etc.).

Git Bash is something that nearly every engineer installs on Windows nowadays.

Shell commands really have lasted... I mean, if I think of the computer skills that I learned as a teenager that I still use today:

1) shell commands

2) sql

3) vi

4) some programming languages: C, Pascal(?), BASIC(?)

Everything else has changed multiple times. Maybe that means it's worth investing in learning them.

The thing is, sometimes it's faster to go to the shell and use `grep -r`/ag than to wait for 'jump to definition' to do its job. It depends on the language and the project, of course, but it's true for me with the big C++ project at my work. CLion is awfully slow to start; VSCode starts fast, but often I can find things faster in the shell than in the IDE :-( so I use it mostly as a dumb text editor.

Does your IDE run inside a 500MB Linux container? After cloud adoption, I would have difficulty finding software developers whose work does not entail working across multiple machines and environments. (OK, the community of scientific researchers is probably exempt from this, but they are a minority.)

From my experience, the capabilities IDEs and Unix tools offer are mostly orthogonal. Also, the inconsistent UX you complain about has a simple solution: write a small wrapper around the offending tool (or use newer ones, e.g., ripgrep and fd). `tldr` also tells you the correct interface quickly.
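Such a wrapper can be a one-line shell function. This sketch (the name `f` and the argument order are my own invention, not an established convention) gives `find` a grep-like interface, pattern first and path second:

```shell
# hypothetical wrapper: pattern first, optional path second (defaults to .)
f() { find "${2:-.}" -name "$1"; }

# demo tree so the call below has something to find:
mkdir -p /tmp/wrapdemo && touch /tmp/wrapdemo/notes.md
f '*.md' /tmp/wrapdemo
```

Once it lives in your shell rc file, you never have to remember `find`'s native argument order again.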

I can just never get past the arg/flag inconsistency/complexity across commands. Perhaps if the docs started with a simple example of what has been seen over time to be the most common incantation for each command, it might be tolerable. But as it is, I spend more time dealing with the idiosyncrasies of a command than I do expressively piping stuff.

Perhaps if Rust had five mutually incompatible borrow-checkers which get called based on who wrote the code for a particular language construction-- that might get across the frustration I feel when using the shell. (Well, honestly I just StackOverflow for the shell incantation I need and that seems to work well enough.)

You're right. I often think the man page should begin with a few examples, then launch into the neverending list of options.

That is no way to learn. In foreign language 101, they start you off with a small group of examples. "¿Cómo estás?" "Muy bien. ¿Y tú?" Afterward, they explain the rules of the language (this is a noun, this is a verb, this is how you conjugate for first-person singular, etc.). In fact, this is how we learn our first language as babies. Anyone remember Mom and Dad pulling out a flip chart? Or did they just talk to you a lot?

The same goes with apprenticeships, I would think. The blacksmith starts the apprentice with simple tasks around the shop. I suppose he would intersperse it with the occasional pontification about principles and theory, but he would not sit down the pupil for weeks explaining everything before just letting him get his hands dirty.

The Linux man pages are upside down. Examples don't come till the very end, if at all.

Thankfully, like you said, there is now Stack Overflow.

I've really come to appreciate PowerShell exactly for the "-Examples" argument for Get-Help (or help/man alias). Although I still prefer a POSIX shell, I've been messing around with PowerShell and quite enjoy it.

In my hazy memory, the examples were somehow all about not the use case I was interested in.

> The Linux man pages are upside down. Examples don't come till the very end, if at all.

man pages are meant to be a full reference, not a quick tutorial.

To get the quick tutorial others have already mentioned cheat.sh [1] and tldr pages [2] is another good resource.

[1]: https://cheat.sh/

[2]: https://tldr.sh/

They can still be both! You can flip the general structure of man pages to be examples first while still having all the complete reference material after.

The downside of that is that searching for something would mean I'd first get the examples, and then the reference.

Or just use tldr, which already does what you want.

> man pages are meant to be a full reference, not a quick tutorial.

Yes, and for both quick and longer tutorials, there are also these things called books and courses - both of which are available in hard copy / offline as well as soft copy / online versions from many years now :) Many of us grew up using them and investing in them for our careers ...

+1 for tldr.sh. It's fantastic, I use it all the time, sometimes randomly for stuff I haven't got installed to find out whether I should homebrew it or not. Unlike man pages, tldr can search and find docs for stuff you don't currently have installed.

And it's fun to read. It's how man pages should be written, at least have a tl;dr before boring you to death.

Big +1 for tldr from me. And the commands are effectively a wiki (or maybe it's stored in git). Anyway you can contribute to them.

> Thankfully, like you said, there is now Stack Overflow.

It's interesting because we can imagine a history where the docs were written much more professionally with such examples. And we can imagine that work having lowered the barrier to entry such that a critical mass of users becoming compositionally literate in shell scripting. (And perhaps shellcheck being written much earlier in this alternate history.)

But Stack Overflow not only obsoletes such an effort, it IMO obsoletes becoming literate in shell scripting, at least in the way the author describes. SO's existence is equivalent to being able to write a natural language query on the command line which "automorphs" into the relevant Stack Overflow example. At that point you just need to understand basic piping, redirection, and enough of the syntax to spot-check the magic answer in order to make small changes for a use case. That's a different kind of skill.

>The Linux man pages are upside down. Examples don't come till the very end, if at all.

You only learn the tool once. Every subsequent time you visit the man page, the information you're probably looking for is frontloaded. Just scroll to the bottom if you want examples?

> You only learn the tool once. Every subsequent time you visit the man page, the information you're probably looking for is frontloaded. Just scroll to the bottom if you want examples?

This is false for most tools we don't use daily. It's false even for tools we do use daily, in some cases (I have to google git commands every month or so for commands I use rarely).

Most people are and remain perpetual intermediates: https://blog.codinghorror.com/defending-perpetual-intermedia...

The experts are the exception.

I bristled just now at the opportunity to have my first strong disagreement with Mr. Atwood. Then I read the article you linked. I don’t think it means what you think it means. He’s talking about users of the software, not developers. And I know, I know, developers are users of software too, perhaps more than the average user. But then he links to this article about homo logicus [0] in which he begins:

> Of all the professional hubris I've observed in software developers, perhaps the greatest sin of all is that we consider ourselves typical users.

> We are experts. Who could possibly design software better than us superusers? What most developers don't realize is how freakishly outside the norm we are. We're not even remotely average-- we are the edge conditions.

Honestly, if you’re a hobbyist or moonlight as a FOSS contributor, fine, nbd. But if I found out an alleged professional working for me writing software was content to never figure something out like a git branch issue in the short term and grow over the long term in their total skillsets, I’d want them gone.

I wouldn’t want a lazy, mediocre carpenter to build my house either. I’m perfectly content to let them build a ramshackle residence for themself where nobody has to suffer because of it.

[0]: https://blog.codinghorror.com/the-rise-and-fall-of-homo-logi...

> I wouldn’t want a lazy, mediocre carpenter to build my house either. I’m perfectly content to let them build a ramshackle residence for themselves where nobody has to suffer because of it.

This is kind of funny since I'm European and we view most US houses as low quality McMansions :-))

If the carpenter does everything correctly I don't care if he doesn't keep everything in memory.

I always thought someday I would become a hardcore Linux hacker and know all the commands and flags, but the reality is I have to relearn every time. Who spends that much time just in the command line these days? I’m sure some people do, but I don’t know what they do unless it’s CTFs or security-related work.

Devops-style workload means a lot of the time spent in the console.

Even then, I look up the quoting difference between $* and $@ every single time, and how to use `read`. Or, more likely, decide it's time to drop bash at that point...
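For the record, the difference shows up when an argument contains whitespace: quoted, `"$@"` re-expands to the original words, while `"$*"` joins everything into a single word. A small demo (the function names are made up for illustration):

```shell
# print each argument in brackets so word boundaries are visible
show() { for a in "$@"; do printf '[%s]' "$a"; done; echo; }

pass_at()   { show "$@"; }   # "$@" preserves the original word boundaries
pass_star() { show "$*"; }   # "$*" joins all args into one word (with IFS)

pass_at   "one two" three    # two words survive
pass_star "one two" three    # collapsed to a single word
```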

docker save | xz is about the longest pipeline I've used, and since compression doesn't provide any meaningful advantage there (except maybe a checksum), I reduced it to just docker save.

> Who spends that much time just in the command line these days?

Just stop using GUI utilities. It really is that simple. If you just don't use them you'll end up in a shell out of necessity because you still need to get things done.

Of course, the majority of my time is spent in my web browser reading documentation followed closely by vim for writing things. Actual time spent interacting with CLIs is a small minority at the end of the day.

I’m not sure why you think CTFs/security has an extra focus on the command line? Most of the security people I know spend their days staring into IDA…

As security person I run stuff in command line as many tools are there and they really are mostly scripts. I don't really need to do much shell magic to do stuff. If I actually need something special I will write a python script.

Fair points. After all, I know the keyboard shortcut to jump to the bottom (Ctrl-G).

According to less(1), the key is: “G or > or ESC->”.

You can also use the key on many keyboards labeled "End".

Sorry, I meant Shift-G. I can no longer edit that comment.

On most keyboards, Shift-G is the same as G; otherwise it would be g.

Ha ha, you're right, in a way. It depends on whether you take G to mean the figure on the keyboard or the figure on the screen.

The figure on most keyboards is G. Yet when you press it, it puts g on the screen. Chromebooks are better in this way: their keyboards are labeled in lowercase.

I actually went back and forth between saying Shift-G, or just G, for this very reason. So I erred on the side of clarity.

I thought the way you wrote it was clear but these comments got me curious what the various conventions might be so I took a quick look at :help in Vim (since it lists an awful lot of key bindings). I'm now officially confused and don't think you can go too far wrong.

In some contexts :help notates things as characters (ex zh, zH, and z<CR>). In other cases I'm seeing things written as <S-F11> and <C-G>. There's also CTRL-H (instead of <C-H>) but I'm not seeing shift written out like that for whatever reason. Sometimes they get mixed (I'm not sure what the rules are) such as for hh<Space> and hh<C-]>. Amusingly enough, :help appears to treat Meta-{char} as case sensitive but CTRL-{char} as case insensitive (I assume there's a reason). I also spotted a <kPlus> (for the keypad).

What an amusingly pointless distraction!

It takes a long time to scroll to the bottom of bash man page.

> Perhaps if the docs started with a simple example of what has been seen over time to be the most common incantation for each command it might be tolerable.

You might enjoy https://cheat.sh which is usable via curl: `curl cheat.sh/grep`

Very impressive. I just made a function 'cheat' that takes a command name as input and curls this website. Thanks.
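Such a function might look like this minimal sketch (the exact form is a guess; it assumes `curl` is installed and the network call only happens when you invoke it):

```shell
# hypothetical helper: fetch the cheat sheet for a given command
cheat() {
    curl -s "https://cheat.sh/$1"
}
```

Usage would then be e.g. `cheat grep` or `cheat tar`.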

Thanks for sharing, that’s a great tool.

Using Fish as my shell somewhat makes this more tolerable, as it checks the manpages and automatically suggests flags in a tab-complete fashion. It also remembers previously run commands and suggests them, which helps with not having to remember exactly which flag does what, because the example comes from the last time you ran the command.

This is why I use PowerShell. Menu completion gives me all of the switches with their full name instead of cryptic abbreviations.

> Perhaps if Rust had five mutually incompatible borrow-checkers which get called based on who wrote the code for a particular language construction-- that might get across the frustration I feel when using the shell.

You mean like the five mutually incompatible async solutions in rust?

The inconsistency sometimes bothers me too, but it makes me feel a little better to remember that so many of the CLI commands we take for granted are part of an old historical heritage. The inconsistency is part of that heritage--today's CLI wasn't designed all at once by one group, but rather evolved over 50+ years from many contributors, back when nobody expected that people would still be using `sed` in 2020.

For me, reflecting on that history helps dull the annoyance of having to type `grep --extended-regexp` but `sed --regexp-extended`.


> `grep --extended-regexp` but `sed --regexp-extended`

Why not use `grep -E` and `sed -E`?

That's what I do on the command line (though I went so far as to just alias `grep` to `grep -E`), but when writing scripts I prefer to use expanded flags for clarity. When going back to a script one wrote a year ago, it's way easier to grok expanded flags instead of a bunch of single letters that have to be re-deciphered. (Especially with something like `grep` or `sed` that have 1,000 flags each.)
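For example, the two spellings below are equivalent (the long options are supported by GNU grep and recent BSD greps, though strictly speaking they are extensions beyond POSIX); the long form is the one that survives a year-later re-read:

```shell
# sample input for the demo:
printf 'Foo\nbaz\nBAR\n' > /tmp/flagdemo.txt

# terse form, fine for the command line:
grep -Ei 'foo|bar' /tmp/flagdemo.txt

# self-documenting form, better in scripts:
grep --extended-regexp --ignore-case 'foo|bar' /tmp/flagdemo.txt
```

Both match the same two lines; only the readability differs.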

> Especially with something like `grep` or `sed` that have 1,000 flags each.

Sometimes I actually read the POSIX man pages instead of the standard ones — they are sometimes more useful because they have fewer features. Sometimes they are less useful because of their verbose language, though.

> more useful because they have fewer features

That’s probably why DuckDuckGo’s `!man man`/manpage.me default to FreeBSD manpages. But if you really want something non-verbose you may prefer tldr.sh, cht.sh, or bro pages.

Hint: if you're still using grep to search whole directory trees rather than a single file, install and start using the "newer generation" of greps (ag, ripgrep, etc.). You likely won't believe the speed.

I use ag many times a day to search 10,400 files comprising about 800k lines of code, and it is in every meaningful way completely instantaneous. It also understands repository structures, and so won't waste your time there.

It seriously changed my (programming) life.

Integrating rg and fzf into vim changed my programming life. I find that combo pretty incredible and something I haven't found in IntelliJ. I usually find myself switching back to my shell and then back to IntelliJ.

Yes, I probably should have said "use ag in the context of emacs" just to clarify how it gets used.

At least when you go to Stack Overflow or read the man page, if you find an answer, then it is likely to work. In contrast, if you go to Stack Overflow to find out how to do something with a GUI program, there's a good chance that the answer you find is for a different version than you are working with, and does not help at all.

Depends. Are you on macOS?

What is your specific claim here?

macOS tools generally do not match what Stack Overflow tells you.

Ah, yes - I have come across some differences in command-line flags and default behaviors between macOS and the nominally equivalent tools as implemented in Linux or specified by Posix, and sometimes, on Stack Overflow, you will find people answering Mac-specific questions on the Dunning-Krugeresque assumption that it is the same as the others. In general, one should always verify Stack Overflow answers before using, but that often gets you closer to the answer you need than one starting from a scenario that simply isn't available on your version of, say, Xcode.

The same goes for systems derived from BSD - in fact, I wonder if some of the differences between MacOS and Linux comes from the former's origins in Mach/BSD via Nextstep.

> I wonder if some of the differences between MacOS and Linux comes from the former's origins in Mach/BSD via Nextstep.

My understanding was that this is most of it. The problem is relatively basic—macOS uses the BSD versions of most unix tools, which means basic commands like sed sometimes function quite differently!
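The classic example is in-place editing: GNU sed accepts a bare `-i`, while BSD/macOS sed requires a (possibly empty) backup-suffix argument, so `sed -i 's/x/y/' f` fails on a Mac. Supplying an explicit suffix is the usual portable workaround:

```shell
# sample file for the demo:
printf 'foo\n' > /tmp/seddemo.txt

# GNU sed:   sed -i 's/foo/bar/' file      (no suffix needed)
# BSD sed:   sed -i '' 's/foo/bar/' file   (empty suffix required)
# Portable:  attach an explicit suffix, which both accept:
sed -i.bak 's/foo/bar/' /tmp/seddemo.txt

cat /tmp/seddemo.txt        # file now contains: bar (backup in .bak)
```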

This is a really big deal.

Almost nothing in computerdom was designed from a developer-centric perspective and it blows my mind.

Maybe I'm 'opposite brained', but it's the first thing that I think about when making something, sometimes at the expense of the algorithm.

There's a bird in the back of my mind literally every time I use the shell chirping at me to translate them all into something consistent, and then make actually useful manpages for them.

The world is complicated, we have too much to learn, communications and ramp-up are essential and part of the product even if there's genius under the hood. (I'm also looking at you Rust, Git).

I recommend https://tldr.sh/. Since I installed `tldr`, I rarely read the man pages.

> Perhaps if the docs started with a simple example of what has been seen over time to be the most common incantation for each command it might be tolerable.

Rsync’s man page opens with examples, and even though I never actually use those specific commands, they are usually enough for me to remember how to do whatever I wanted to do.

It’s even worse if you `man test`: you’re directed to the shell builtins page, which is a few hundred pages long.

For builtins, use help. 'help test'

tldr [0] is great for easy to understand example commands. It is community driven and there are many cli based programs you can install[1][2].

[0] https://tldr.sh/

[1] https://www.npmjs.com/package/tldr

[2] https://github.com/tldr-pages/tldr-python-client

That Python client seems to lack maintenance. It doesn't have proper color codes for Windows at all.

That's true. I thought that was the one packaged by most Linux distros, but Debian packages the Haskell client [0]. Looks like they have some Windows builds too.

[0] https://github.com/psibi/tldr-hs#readme

It's even worse when you bounce between Linux, FreeBSD, and MacOS.

You might want to look into the ‘tldr’ command for those simple shell examples. It’s a very helpful command.

Lookup tldr on github. It works as a replacement for man pages 90% of the time.

You would probably love the tldr[1] command, then. It's a user maintained library of common example incantations of any given command. Nowadays I check it before `man`.

[1] https://tldr.sh/

Learning the shell was one of the best things I did as a programmer.

It helped a lot that 15+ years ago I committed to running desktop Linux as a daily driver. Especially back then when things were much rougher on the desktop, using Linux as a daily driver means occasionally doing something in the shell.

Ultimately, just like learning a programming language, one must have a practical reason, a project, to make it worth the while to learn as you go. For someone interested in learning to use the power of the shell, don't just go read a book or do some exercises or tape a cheat sheet to the wall. Instead, commit to using it for daily tasks for a month, or for maintaining a project entirely in the shell, not even using a GUI file manager. It's tough at first but so worth it!

At least it still gives you the choice. I learned a lot by stripping down my system and removing/replacing a lot of components to the extent that it didn't resemble Fedora (or Debian, or whatever) at all anymore. The amount of stuff you learn after completely trashing your only system and needing it to be restored and usable in a few hours…

Funny, I actually learned Linux the opposite way. I started using Gentoo. I know that seems like learning Linux on hard mode, but for me it worked well. I had dabbled a bit before that, but I didn't really get Linux, before getting my hands dirty in this way.

Hear hear! I remember a senior dev at my first job being stymied by the command line; he was very adept in the IDE, but had a hard time navigating directories. It was definitely a disadvantage when it came to getting stuff done.

One thing the author didn't cover is how you can share reified knowledge when you write shell scripts. (It's the same with other programming languages, but they aren't as easy to write or modify.) That is really really powerful, because you can not only share it with others but also with your future self.

I wrote about this more here: https://letterstoanewdeveloper.com/2019/02/04/learn-the-comm...

I used to joke at my previous position that I was a "programmer-lite", because I was in a support role and they wouldn't let me write code, but for some reason I was allowed to write and commit bash scripts, so all of the programmers started coming to me to write the shell script interfaces to their programs. I took pride in making the scripts "safe", running them through shellcheck and actually checking the exit status on things.

To this day, I've still never quite gotten the hang of the whole "trap" mechanic, though...
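In case it helps, the core trap pattern is small. Here's a minimal sketch; the `cleanup` name and the temp-file task are just illustrative:

```shell
#!/usr/bin/env bash
# Minimal trap sketch: register a cleanup function that runs
# whether the script exits normally or fails partway through.
set -euo pipefail

tmpfile=$(mktemp)

cleanup() {
    rm -f "$tmpfile"
}
# EXIT fires on any shell exit, including exit-on-error from set -e.
trap cleanup EXIT

echo "working data" > "$tmpfile"
grep -c "data" "$tmpfile"   # prints 1
# No explicit rm needed: the trap handles it.
```

Many people also list INT and TERM explicitly so the handler runs on interrupts too.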

Yeah the way I think of this is that some people write a text or Markdown file with shell commands for documentation.

I INVERT this, and write a shell script with comments :) That way someone else can reuse your knowledge more easily (and your future self as well).


Constructing a big curl command to use the Zulip API, and also using jq for the first time. I used this to easily make a blog post out of a long Zulip thread [1]

https://github.com/oilshell/oil/blob/master/services/zulip.s... (oops some tabs snuck in here)

Figuring out how to use uftrace (and successfully optimizing Oil with it [2]):


Though one issue is that shell scripts don't really specify their environment, but there is a large number of tools growing around containers that can solve this problem. (Basically Docker is being refactored into something more sane; thank you to OCI and others.)

So I hope to integrate the Oil shell and containers more so shell scripts are more reproducible. I mean most of the container tools are already command line tools so in some sense it's already done, but you can imagine closer integration (e.g. not just passing code as text from outside the container to inside the container).


I wrote some notes about the documentation issue here: http://www.oilshell.org/blog/2020/02/good-parts-sketch.html#...

And one thing I've wondered is if Oil should literally run shell out of markdown, so you can create executable docs. I can see it being useful, but it might be something you should do with a separate tool that converts markdown code blocks to a shell script...
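For what it's worth, a first cut of that markdown-to-shell conversion is tiny. Here's a hedged sketch: the `md2sh` name is made up, and it assumes simple, well-formed, non-nested sh-tagged fences:

```shell
# Sketch of a markdown code-block extractor: print only the lines
# inside fenced blocks tagged "sh". The md2sh name is hypothetical,
# and this assumes well-formed, non-nested fences.
md2sh() {
    fence='```'   # three backticks, built as a variable
    awk -v f="$fence" '
        $0 == (f "sh") { inside = 1; next }
        $0 == f        { inside = 0; next }
        inside         { print }
    ' "$1"
}
```

Then `md2sh doc.md | sh -x` would run the documented commands, echoing each one as it goes.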

[1] http://www.oilshell.org/blog/2020/11/more-syntax.html

[2] http://www.oilshell.org/blog/2020/01/parser-benchmarks.html

I also do this with my personal scripts; I just use comments for commands and parameters I'm not using frequently, and then I can just grep the script folder and/or use an older script as example.

Writing that knowledge in some document is nice but in reality it's just more effort for no real gain, especially as it only helps if the relevant note can be found quicker than an online example/explanation.

yes, you're absolutely spot on - I'm part of a remote team and we made it our mission to use shell commands to communicate business-centric admin tasks, such as: add this row to the table, modify this stored parameter, invoke that API call via curl, etc. They're usually shorter than 10 lines but illustrate exactly what needs to happen with very little room for interpretation.

This week I got my drill bit bound up in a double stud I wanted to run some wire through. The drill went into thermal cut out and wouldn't budge. I was at a loss of how to get the bit out without destroying the stud until I remembered I had a hand drill in the shop. It came from my great-grandfather. It's solid and well maintained and I was able to hand crank that drill bit through the rest of the wood. That's not the first time a simple hand tool has saved the day when a power tool let me down.

I agree that IDEs and the tools that accompany them are powerful and can make you more productive. Learning them is an investment that can pay dividends. On the other hand, most of the IDEs I took the time to learn in school are now obsolete. I still use the shell today because those skills are still relevant and have gotten me out of a jam plenty of times. I've chosen to prioritize learning tools that are reliable and lasting even if it costs me some productivity.

I wish that shell would be more sane.

I don't think that a shell should be a complete programming language. If you need a programming language, then better use one. There is xonsh if you are looking for something like this.

I think there should be a better bash with a very clean and consistent interface. Some of the most commonly used utils like awk '{ print $2 }', sed, grep, sort, ... should be included out of the box to improve performance and to provide consistency across distributions.

> I wish that shell would be more sane.

Absolutely agreed, shell is in many respects terrible.

>I don't think that a shell should be a complete programming language.

On the contrary, I think shell should be a more complete programming language! Drop the stringly typing and add actual types (hence eliminating 80% of bothersome awksedgrep magic; yes, no need to tell me it's a real tall order), add proper error handling...

I truly value simplicity where possible, but I think that the primary interface that I use to communicate with my computer should be as powerful as possible. Numerous times I've built up a shell pipeline only to realize near the end that I need to do something that shell is horrible at, and had to redo the whole thing in Python from scratch. I cannot really see a reason to limit it.

>Some of the most commonly used utils like awk '{ print $2}', sed, grep, sort, ... should be included out of the box to improve performance and to provide consistency across distributions.

UNIX "philosophy" got a lot of things wrong, but not this one. These common utilities you mention are good especially when they're separate, because it's not the shell's job to improve performance, provide consistency, etc... of a dozen and a half different utilities, a number sure only to grow in time as people discover what commonly used thing they want in their shell. Though I'll give you that `awk '{ print $2 }'` should definitely be its own utility or a built-in.
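That hypothetical field-picker is a two-line shell function, for what it's worth (the `field` name is made up, not an existing utility):

```shell
# A tiny wrapper so `field 2` replaces the awk '{ print $2 }' incantation.
# The name "field" is hypothetical; define it in your shell rc file.
field() {
    awk -v n="$1" '{ print $n }'
}

printf 'alpha beta gamma\n' | field 2   # prints beta
```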

This is exactly the idea behind PowerShell. You don't pipe strings between commands, you pipe typed objects, etc. I'm convinced that if it had better support on Linux it would usher in a new age of shells. As it is, though, it would be hard to use it as a daily driver.[1]

[1] https://code.joejag.com/2020/a-month-with-powershell.html

Check out my pandemic project, marcel: https://marceltheshell.org. It hits that spot you described: still a shell, but more strongly typed. Instead of bailing out and redoing everything in Python (when you reach the end of your pipeline), you can add a bit of Python code in your command.

For example, find all the .py files under the current directory, find the ones that have changed in the last day, and then print the lines (with their filenames) that define classes:

    ls -fr \
    | select (f: f.suffix == '.py' and now() - f.mtime < days(1)) \
    | read -l \
    | select (path, line: line.startswith('class '))
read -l reads and labels a file, i.e., filename -> stream of (filename, line in file)

(Yes, this is doable with find and awk, but this is just an accessible example that gets across the idea of Python functions on the command line.)

> Yes, this is doable with find and awk

You don't even need awk for that one:

    grep ^class `find . -mtime -1 | grep py$`
Maybe you could try to use an example where it really makes a difference? Some natural operation that would be very cumbersome with plain shell but is easy and direct with marcel. Otherwise many people may fail to see the point.

Read a CSV file, foo.csv, and sort by the sum of the 4th and 7th columns (which are integers):

    read -c foo.csv | sort (*x: int(x[3]) + int(x[6]))
I suppose we could argue about what is a "natural" operation. Marcel grew out of a set of tasks that were "natural" in the domain I was working in. An important part of that domain was operating on databases, and clusters of nodes, and databases on those nodes. So marcel has features for those capabilities.

Partly, I was also scratching an itch: I prefer using Python to exploring the sublanguages of various shell commands. Much cleaner, in my opinion.

> Read a CSV file, foo.csv, and sort by the sum of the 4th and 7th columns (which are integers):

Yes, that's a much better example. Here the "obvious" shell solution involves using awk to compute the sum of the two columns and putting it as the first field, sorting by the first field, and then removing that field. I guess a "rosetta stone" of such examples (e.g. on the frontpage of the site) would make a strong case for the interest of marcel.


    awk -F, -v OFS=, '{print $2+$3, $1, $2, $3}' foo.csv  |  sort -n  |  cut -d, -f2-

I think this code does not fulfill the task: you need to sum columns 4 and 7, and keep the rest of the data intact in the output.

You're right, that should be

      awk -F, -v OFS=, '{print $4+$7, $0}' foo.csv  |  sort -n -k1 -t, |  cut -f2- -d,


awk -F, -v OFS=, sets the input and output column separator to comma, '{print $4+$7, $0}' outputs the sum of column 4 and 7 before the rest of the line.

sort -n -k1 -t, sorts the file numerically on column 1, with comma separator.

cut -f2- -d, removes column 1, with comma separator.

This is of course not robust for general CSV files, but I don't think OPs marcel is either. A robust solution requires a proper CSV parser.

The biggest warts I see in the classic unix solution is that all the tools use different flags for the field separator.

edit: if you know that the csv doesn't contain tabs, you can omit some flags for a more concise

    awk -F, -v OFS='\t' '{print $4+$7,$0}'  foo.csv | sort -n -k1 | cut -f2-
since sort and cut default to tabs/whitespace as separators. If you're unsure about the contents of the CSV, you really need a proper CSV parser.

> This is of course not robust for general CSV files, but I don't think OPs marcel is either. A robust solution requires a proper CSV parser.

I haven't tested corner cases, but marcel relies on the python csv module, which is probably better than any initial attempt at a parser that I could write in an hour.

This is what I meant about sublanguages. Many people, (myself included), would need to go to the man pages to find the necessary arguments to awk, sort, and cut. I find it much easier to just write a little Python, even if the end result involves more typing.

At the end, it doesn't look that bad. Of course using the csv format is a bad start in unix. Much better to convert everything to tsv and work from there. In that case the "obvious" shell solution is quite clear.

    <foo.tsv awk -F'\t' -v OFS='\t' '{print $4+$7,$0}' | sort -n -k1 | cut -f2-

Oh, and when you do bail out and go to Python, marcel's operators are available in a module (marcel.api), so that you can take your pipeline and move it into Python easily. E.g. here's the same example in Python:

    from marcel.api import *

    for path, line in (ls(file=True, recursive=True) |
                       select(lambda f: f.suffix == '.py' and 
                              now() - f.mtime < days(1)) |
                       read(label=True) |
                       select(lambda path, line: line.startswith('class '))):
        print(f'({path}: {line})')

Since everybody else is peddling their better shells, I'll drop a plug for mine: https://rash-lang.org

It's a shell embedded in the Racket programming language. If you're familiar with Xonsh, it's similar, but both more powerful and less polished. It allows easy, recursive mixing of shell code (not posix compatible, but with a similar feel) and Racket. Its interactive mode is not polished (I need to write a better line editor), but it works, and it's great for programming.

I haven't worked on it much lately (I need to wrap up other things to finish my PhD), but this weekend I added support for user-programmable substitutions with automatic cleanup. As an example, beyond common substitutions like process substitution (i.e. <() and >() in bash), I added a demo “closure substitution” form that allows you to send Racket functions to `find -exec`. Importantly, this is something that a user could add, because Rash is super extensible and malleable.

Shell is a great DSL, but there are huge advantages to having an embedded DSL rather than a stand-alone DSL. Rash inherits all the cool features of Racket and can be mixed with other Racket languages (e.g. Typed Racket, Honu, etc), has advanced features like first-class delimited continuations and the world's most advanced macro system (which makes Rash possible), and any shell script can import functions from Racket's catalog of third-party packages. Embedded shells like Rash allow a shell script to be copied from interactions like you do with Bash, but then grow past the “rewrite in Python” stage gradually with no rewrite: just a gradual transition from being more shell code to being more “normal” code.

Though I'm kicking myself for continually not getting around to rewriting all the documentation, which is still very poor.

Keeping those small utilities separate is definitely the right choice as it keeps everything very flexible. I could just replace grep with ripgrep in my workflow and suddenly get a drastically quicker search without having to change anything else. If all those features were integrated in the shell the user is more or less stuck with the whole package.

Which is also one of the reasons I prefer using Vim over Emacs; I can embed Vim in any terminal multiplexer of my choice and combine it with various terminal programs. Every single part of that environment is easy to understand and easy to replace with another program or script. It's less homogeneous and consistent, but I am fine with that tradeoff. As one might guess, I am not a big fan of IDEs.

I've been trying to build a computer with a single principled HLL included that can also double as a shell. There's also http://www.oilshell.org which _also_ tries to be compatible with existing shell. It also tries to include features from awk and sed: https://www.oilshell.org/blog/2016/11/13.html

If these design points turn out to be over-constrained and have to make major compromises, yes the next thing I would try would be a small separate language for shell.

BTW I think the general idea of Oil's syntax is working out: start in "command mode", and then on the RHS of = you switch modes to Python-/JS-like expressions. There are a few other cases where you switch to expression mode, like proc p(a, b) [1]

So far nobody has complained about any of this, I'm guessing because it looks very familiar, and that was intentional. Oil takes some pains to literally look like shell + Python syntactically, with better semantics.

I noticed that a few other shells are having problems with this command/expression distinction, and I discussed it like 3-4 years ago with Ilya Sher (of NGS) and a few other people. They were also having the same problem.

For example, does / mean a path separator or the division operator? Does * mean a glob or a multiplication? In Oil, this is no problem. It's obvious depending on the context.

Though I'm interested in more feedback on this, and the latest release is available to try as always :) https://www.oilshell.org/release/latest/


The awk and make integration is still doable but not done. Awk might require a notion of "lazy expressions" or "lazy arg lists", which would be shared with dplyr-like functionality. Oil's Zulip is open for discussion on these ideas :)

[1] https://www.oilshell.org/release/0.8.5/doc/command-vs-expres...

If you think the unix shell sucks, try fucking around with the Windows shell - either of them. You're either in a world of legacy and inconsistency dating back to DOS and the days of 8.3 filenames, or you're in the also wildly inconsistent, but very verbose, world of PowerShell.

About PowerShell's verbosity: My pandemic project is yet another pipe-objects-instead-of-strings shell, marcel: https://marceltheshell.org. Its "home" is Linux, not Windows.

Marcel is also somewhat verbose. It exposes Python functions on the command line, so when you need to write such a function, it starts getting long. E.g. if you want to sort files by last time modified:

    ls | sort (f: f.mtime)
There are abstraction mechanisms to help deal with this. E.g. this defines a command to sort piped-in files by time.

    bytime = [sort (f: f.mtime)]
So you can then do this:

    ls | bytime
Still kind of wordy. I'm thinking about a way of having flags expand to filters. That would get things as compact as bash.

Yeah, Perl started out a bit like this. I never adopted it for myself or my company, but I did get a chance to have lunch with Larry Wall and talked to him about Perl. This was back when Perl was quite a young and promising language for a certain class of problems.

Perl had very good performance, despite being a "scripting" language, because so many of the tasks that a typical shell script might need to do were supported by built in features in the language (for example handling regular expressions, interacting with command line arguments, exec'ing other programs).

Today, once a shell script requires very much logic, I'm inclined to use Python. I know Python and find programming in it easier than Perl and bash and other shells. Furthermore, other programmers understand Python better than bash.

Oil is making shell more sane, in a way that you can realistically adopt, since it runs existing shell scripts. See:

Four Features That Justify a New Unix Shell http://www.oilshell.org/blog/2020/10/osh-features.html

> I think there should be a better bash with an very clean and consistent interface.

Absolutely! Some work should be put into making it way more intuitive. Having to remember what every flag means (which is different in every app!) is not intuitive at all.

By default there should be some sort of IntelliSense-style auto-complete which can also provide guidance on what on earth all the flags mean, and maybe eventually it could tell you what it expects to do before you run the command. Otherwise, for most users it ends up being "paste in this command you googled with a load of random flags you don't understand and won't remember".

Just suggested it in another comment, but Fish might be similar to what you describe. It has tab-completable flags that it gets from the manpages, and remembers and suggests previously run commands which might be close to the auto-complete you're after.

Fish is better, but it still only gets 1/10th of the way.

Why does the terminal have to look like a terminal? Like, why is the 1980s ASCII telnet style the only way to do this?

The ASCII fish image on boot and the fully ASCII menus only reinforce that this isn't a modern interface; it's an improved 1980s interface.

Why, for example, can autocomplete not look like this? https://code.visualstudio.com/docs/editor/intellisense

Why do progress bars when you are downloading from pip not look something like this? (still in-line with the terminal like a sparkline) https://docs.microsoft.com/en-us/windows/win32/uxguide/progr...

Like surely everything doesn't have to be SO 1980's if we want shell to stay relevant.

because then your shell will be slower than a snail.

for real though, after looking at your examples I feel that features like those would fall on your terminal emulator to bring to the table, not the shell itself.

specifically, I don't want to imagine trying to connect to a headless server during some crisis over bad wifi and have the shell think it needs to send a bunch of graphics to me.

This is exactly what I’ve been working on with Fig. Intellisense exists in every modern IDE but not the shell.


It wouldn't be coherently unixy if it was. Small tools that do a limited subset of things, and reliably take/output data from stdin/stdout is the way unix is done. sh is just the glue we use to tack it all together.

Bash (and other shells too) is like it is, because it's an accretion of 50 years of history rather than a singular top down design.

I think a misunderstood part of Emacs is that this is exactly what it is: it's just glue, but instead of being in a command-oriented environment, it's in an editor-oriented environment. You run the same Unix tools but glue their results together into a text buffer with a full programming language that then lets you programmatically edit the results.

People joke about Emacs being an OS but it’s really just a different kind of shell, oriented towards text editing instead of issuing commands.

Without commenting on substance: Emacs is an application that rehomed itself on Unix, but isn't a genetic Unix application; Emacs was developed on ITS.

Definitely originally. I suspect that a modern emacs has more unix under the hood than it has ITS under the hood.

I am not sure if we should care about the differentiation between TECO-Emacs on one hand and Elisp-Emacs on the other.

One of the best things (in my opinion) shells could have done is use newline-separated filenames instead of space-separated ones, which would make filenames with spaces much easier to handle (you can have newlines in filenames, but in this magic world let's ban those).

Oil fixes this problem with QSN: http://www.oilshell.org/blog/2020/10/osh-features.html#safe-...

You can have newlines in filenames, but you can also pass binary data with \0 if you want (it's 8 bit clean). Or you can use read -0 for a NUL delimiter, and eventually length-prefixed blobs with netstrings.

The way I now think of it is that Oil should have all 3 solutions to "the framing problem" in networking: delimiting, escaping, and length prefixing. So you can convert from one regime to the other by using shell.

That would pose problems also. The real solution, with no need for magic as you say, is to disallow spaces at the filesystem level (just like slashes and the null character are forbidden). For users typing filenames, this shouldn't be a problem, as those can be encoded, e.g., as Unicode's non-breaking space. Using space as a separator is a very important power, one that other programming languages share. In what other programming language can you put spaces in variable names? In the shell, filenames are variable names! It is only natural that a space cannot appear inside a filename (but of course, most other unicode characters can).

> In what other programming language can you put spaces in variable names?

PowerShell; although you have to use long form to do it:

    PS C:\> ${var with spaces} = 1

    PS C:\> Get-Variable *space*

    Name                           Value                                                                                                       
    ----                           -----                                                                                                       
    var with spaces                1                                                                                                           

    PS C:\> ${var with spaces} + 2

> In what other programming language can you put spaces in variable names?

In SQL, notably. Such identifiers need to be quoted, of course.

> In what other programming language can you put spaces in variable names?

ISTR that many dynamic languages allow this with dynamic assignment features, but you often need to use similar dynamic access functions rather than simple variable access to access them.

Ruby also allows whitespace if it is unicode but not ASCII whitespace, without use of dynamic assignment/access methods.

SQL allows spaces in object names but requires quoting, similar to shell.

This would require filenames to be valid UTF-8. That would be a good thing to do nowadays; irritatingly (having tried handling Unix filenames as UTF-8 in the past), surprisingly many users have non-UTF-8 filenames.

> This would require filenames became valid utf-8.

No, it wouldn't; you'd just get EINVAL if you pass a filename containing "\x20" to open or other syscalls. If an application wants to use "\xC2\xA0" in a filename, it can do that, or it can not do that, same as currently. Same applies if it wants to use, say, "\xAA\xFF".

You can have spaces in F# variable names and functions :)

I totally agree that file names shouldn't have spaces because they are variables, and I (usually) name all my files with camelBackNotation names as I was taught is best practice for variable names. That said, with every command I remember running, treating the file name like a string - e.g. "file name.png" - has worked whenever I encounter spaces, no escape characters required.

The problem with shell is when I want to write something like 'find . | grep namepiece | xargs rm', or something like that. I can work around this by piling everything into the find, or in some cases by liberally spreading -0s around, but neither feels very "shell-like" to me.
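For the record, the space-safe spelling of that pipeline keeps the same shape; it just switches every stage to NUL delimiters (assuming GNU find/grep/xargs, which support -print0, -z, and -0):

```shell
# Same pipeline, but NUL-delimited end to end so filenames with
# spaces (or even newlines) survive each stage intact.
find . -print0 | grep -z namepiece | xargs -0 rm -f
```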

Thus it seems that a filesystem that forbade spaces in its filenames would solve your problems with the shell? Sounds good to me!

I don't see a functional difference in denying spaces in paths from denying new lines in paths.

My dream OS would forbid path names that don't match \w in grep.

> I don't think that a shell should be a complete programming language.

What do you mean exactly? If you have pipes and lists (e.g., the lines on a file) you are Turing-complete and people can and will make arbitrary programs. How would you like, precisely, to restrict the shell language so that it is not Turing-complete?

I mean shell should not be Python, like xonsh. In my opinion it should not add programming language features (like types, futures, ADTs, ...), but instead should clean up the existing built-ins, plus maybe adopt a little bit of coreutils as built-ins.

Never heard about xonsh, it looks really cool. Seems like a step in the good direction (from python to shell). It needs a few steps more to attain the elegant perfection of the shell language, for example, removing this "type" stuff from the variables ;)

Your wish has been granted: awk '{ print $2 }' can be replaced by 'cut -f2'. And all of these commands are POSIX command-line utilities that have shipped on Unix systems since basically forever, with awk (the one true awk) introduced only in 1983 or so.

cut doesn't use the same field delimiters as awk.

And awk doesn't let you easily select a range of fields like cut
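Both gaps are easy to see in a toy example; note how the two tools disagree about repeated delimiters:

```shell
# awk's default field splitting collapses runs of whitespace;
# cut treats every single delimiter as a field boundary.
printf 'a   b\n' | awk '{ print $2 }'   # prints b
printf 'a   b\n' | cut -d' ' -f2        # prints an empty field

# And cut's field ranges have no single-flag awk equivalent:
printf 'a b c d\n' | cut -d' ' -f2-     # prints b c d
```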

I agree. There's nushell which looks a lot more sane, so I have hope that we won't be stuck with insane tools from the 80s for the rest of time! Maybe people will stop deifying those old flawed tools too.

If the data is small, then expressiveness is more important. If the data is large, then multiple processes connected by pipes let you employ more CPUs.

Have to pull this old chestnut out: https://adamdrake.com/command-line-tools-can-be-235x-faster-...

I am not the author of that piece, but the analysis is wonderful.

That pipeline sure does look convenient. Let's see how it will handle a path with spaces.

  $ git status -s | grep '^ D' | awk '{ print $2 }' | xargs git checkout --
  xargs: unmatched double quote; by default quotes are special to xargs unless you use the -0 option
  $ git status -s                                                          
   D "g h i"
I'd be lying if I said I was surprised, to be honest.

Yeah, it's annoying that unix filenames and shell quoting are both fundamentally broken and that (as above) people would generally (knowingly) write a broken-for-spaces-in-filenames version by default because it's easier (and the author no doubt knew he'd not encounter spaces in his repo). Having said that, it's not very hard to fix; I'd probably write something like:

    git status -s | awk -vORS="\0" 'gsub(/^ D /,"")' | xargs -0 git checkout --
Which is about the same length. This will still break for malicious input (newlines in filename), but for the original use case that's not a concern.

But now you require GNU awk; it no longer works under POSIX (so, for example, on busybox). Which sucks a bit.

It'll work with more than gawk (e.g. mawk as well) but yeah, it will not work with busybox awk. In the context of an interactive shell command I doubt many people care much about POSIX; indeed I struggle to recall any context in which I ever cared about POSIX compliance as such. Unless you have a personal or professional interest in fringe operating systems, POSIX compliance is mostly of interest as an imperfect proxy for the question: will it work on both Linux and macOS/iOS (and maybe busybox)?

Anyway, I suspect the following is POSIX compliant:

    git status -s | awk 'gsub(/^ M /,"")' | tr \\0 \\n | ...

> Unless you have an personal or professional interest in fringe operating systems posix compliance

I mostly hit things like these with alpine containers and openwrt. OpenWRT I guess could be considered "fringe operating system", but alpine (at least for containers) seems reasonably mainstream?

Ugh, should of course have been

    git status -s | awk 'gsub(/^ D /,"")' | tr \\n \\0 | ...

You can fix it by replacing the awk with "cut -d ' ' -f 2-"

Haven't tested it, but you may also need to throw a "tr '\n' '\0'" in there and call xargs with "-0" to make it happy.

This will still break on files with a CR in the filename.

(CRs are allowed in unix filenames but, imho, they should not be.)

you also want to tell xargs not to run the command on an empty set; otherwise you'll run a bare 'git checkout --', which will probably kill all your local changes.
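Concretely, GNU xargs has a flag for exactly this; -r (--no-run-if-empty) is a GNU extension, while BSD xargs already skips the run on empty input:

```shell
# With -r, xargs runs git checkout only if awk actually produced
# filenames; without it, empty input yields a bare `git checkout --`.
git status -s | grep '^ D' | awk '{ print $2 }' | xargs -r git checkout --
```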

I think people who routinely write this kind of command in an interactive shell are well aware of the shortcomings; they just deal with them when they need to and not before. One example is that people very rarely handle '\n' or spaces in filenames (e.g. using the -0 option of `xargs`). Whatever works is fine as long as one knows the risks.

For a one-off thing like the one mentioned in the article I'd say it's almost ok, although I'd put `git checkout --` in the category of risky commands if you don't have any backup.

I spend many hours of each day ~programming~ wrangling text files and I use macos + zsh + textmate2 for my daily drivers. I see shell as an important proficiency because it helps maintain a lower-level understanding of how the "magical" GUI "works," which often helps in debugging obtuse errors, and I'm sad to encounter more engineers who are completely unfamiliar with it.

When it comes to examples like that posted by the OP, I like the combination of piping/pasting to mate and multi-caret editing for most scenarios where others would reach for awk/xargs.

Here's me following the same example scenario but with multi-caret editing (slowed down slightly):


Step by step:

  1. git status -s | mate
  2. select " D" with arrow keys + shift
  3. command-E macos default for "use selection to find"
  4. option-command-F to find all (multi-caret editing starts)
  5. type "git checkout" to replace " D"
  6. command-left to move cursor to start of line, then shift-command-right to select to end of line.
  7. copy
  8. select all + delete (clear document)
  9. paste
  10. press return to insert newlines
  11. select all + copy
  12. switch back to terminal
  13. paste
To me, this is many small steps, but each step is more mechanical and flows naturally, and the general flexibility of multi-caret editing means it is applicable more often in my daily work.

This is exactly how I would have done it, just using my editor of choice (kakoune) instead. I also wrote on Drew's mailing list in case anyone is interested: https://lists.sr.ht/~sircmpwn/public-inbox/%3CCAKW6382rV5iW2...

Learning to use the shell and learning SQL are the two skills that have stayed relevant throughout my entire 20 year career.

Other tools and languages come and go, but the shell and SQL just keep on providing value.

Any resources/recommendations for truly learning SQL? I feel pretty comfortable with the basics, but could certainly stand to improve my SQL toolkit!

Haven't tried it, but I've heard high praise here on HN for Jennifer Widom's SQL course. It's now on edX.



I'm a big fan of Markus Winand: https://use-the-index-luke.com/

I've got a bunch of SQL resources I've collected on my blog that you might find helpful: https://simonwillison.net/tags/sql/

> I can work with this. I add grep '^ D' to filter out any entries which were not deleted, and pipe it through awk '{ print $2 }' to extract just the filenames.

Best hope your file names don’t have spaces in them.

One of the downsides of the shell on Unix is that everything is usually text meant for humans so you have to spend an annoying amount of time dealing with delimiters that make it easy on the eyes but a pain in the ass.
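For git specifically, one way to sidestep the delimiter problem is NUL-separated output: `git ls-files -d` lists files deleted from the working tree, and pairing `-z` with `xargs -0` keeps spaces (and even newlines) in names intact. A sketch, with an invented file name:

```shell
# Scratch repo containing a file name with a space in it
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
echo data > 'has space.txt' && git add . && git commit -qm init
rm 'has space.txt'

# NUL separators instead of whitespace-delimited text meant for humans
git ls-files -d -z | xargs -0 git checkout --
```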

couldn't agree more. waiting on nushell to mature so that rich data is the norm for shell stuff.

learning to be comfy in shell is still worth it though. It's saved me, at minimum, hundreds of hours of work.

Piping and shell commands are powerful, but generally a bad experience. You have to plan your command, try to run it and understand how all the piping steps work. If you get it wrong, it’s an annoying experience.

As an alternative, try Sublime with multiple selection (or other editors). With multiple selection skills you can transform your lines in a WYSIWYG interactive format which is much easier to work with for me.

(To use it well you need to know the following shortcuts: split selection to lines, select next, and the alt+arrows jumps to next/previous word)

This won’t mean you don’t need to learn how to use shells, but is still pretty fun to use.

Tips on that:

- Use "head" to reduce your data first and then quickly iterate on the pipeline. (Or sample [1] if head isn't representative; this is rare in practice)

- Use tmux (or at least 2 terminals) with shell. The left side should have your editor, and the right side should have your shell (just like the screenshot in the blog post -- that's what mine looks like too)

There is definitely a place for GUIs (IntelliJ CLion beats the GDB console for me) but you can get really far with vim, tmux, and shell.
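A minimal sketch of the "head first" tip, using a toy log so it runs anywhere (the log format and path are made up):

```shell
# Toy input standing in for a real log file
printf '%s\n' 'GET /a 200' 'GET /b 404' 'GET /a 200' 'GET /c 404' > /tmp/demo.log

# Develop against a slice: cheap to re-run while you tweak each stage
head -n 2 /tmp/demo.log | awk '{ print $3 }' | sort | uniq -c

# Once each stage looks right, drop the head and run on everything
awk '{ print $3 }' /tmp/demo.log | sort | uniq -c
```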


> try Sublime with multiple selection (or other editors). With multiple selection skills you can transform ...

Yes! This saves me time consistently. I could wrangle a shell script, or where pattern-matching is essential, using text editor to select w/ keyboard shortcuts is VERY powerful.

Someone should write "become IDE literate" as a response to opening of using vim with no extensions and using grep a lot.

I've been using editors that are language aware since at least the late 90s. Depending on the language they'll show me all references, take me to the definition or declaration, stack those jumps so as I follow the links I can pop back a level to where I was. All of this is instant. 1000x faster than opening a shell and trying to grep all the project files for word phrases which, not being language aware, can't tell if one 'foo' is relevant or irrelevant from another 'foo'. They'll also let me refactor various things from renaming a method/class/variable/function and correctly fixing all the related files to doing things like changing class members to getter/setters and other things.

The difference is like using a hammer vs using a nail gun. There are times where they hammer is useful but given the job of constructing a building a nail gun will get the job done much faster.


Still, a lot of people start out programming with complex IDEs and end up being unable to run their code without the "play" button of their IDE.

Starting out with, or only ever learning, an IDE also hides a lot of things from you.

For me, knowing the shell isn't about IDE vs. shell tooling, it's about, whatever you use, be aware and knowledgeable about the foundation and being able to do stuff even if there is no IDE. An IDE is a great tool, but it shouldn't be a crutch.

And vim without extensions - I'm pretty sure that's a rare thing among programmers.

Still, a lot of people start out programming with complex compilers and end up being unable to hand-write their assembly code in pure HEX without the "help" of their compiler.

Starting out with, or only ever learning, a compiler also hides a lot of things from you.

Now, I am familiar with assembly, I live and breathe it honestly, but I don't gatekeep and say "you're not a real programmer unless you've written your own assembler macros". Some people are happy in their IDEs and see no reason to move to the lower-level of abstraction in their tooling. I don't see why we should fault or shame them for that.

Personally, I see shell as a poorly-designed, very error-prone language and I think we should move past it, onto far better tools.

Where did I fault anyone, where am I gatekeeping?

I'm talking about the benefits of knowing more stuff.

I'm not arguing against IDEs, I'm only arguing about the benefit of also knowing other stuff.

Better tools: I'm very interested in what I would, in lieu of a better term, describe as Bret Victor's ideas and the example of Swift Playgrounds. At the same time, I'm very skeptical of anything that isn't a simple, human-readable text file underneath. One thing I keep thinking about is making diagrams executable, but I haven't seen a graphical programming language that convinced me. Still, I think this could happen some day, if someone finds the right design.

By framing the IDE as a crutch, that's very much in the gatekeeping category.

Is "gatekeeping" now just another word for making people feel bad? Like "pretentious" or "elitist" has become? Nobody is saying that people with crutches shouldn't be allowed to walk.

edit: It's so strange to not only demand the right to not know how to do something without a complicated tool (a right which is inalienable and not threatened), but to also demand that people who do know how to do that thing not think that it is better to know how to do that thing. I get that when one has invested their time into a tool they want to defend its usage, but people who aren't dependent on a particular vendor's tool have also invested time, and should be able to think it's better to not be dependent on a particular vendor without it being considered violence or bullying (in the modern twitter sense.)

I mean shouldn't you feel bad if you're trying to discourage active engagement in a topic if one doesn't do it in a specific "correct" way?

Personally I've found a diverse set of viewpoints to be incredibly valuable on the teams I've been a part of. I have my own biases on testing, stability, performance and seeing how others approach it has broadened my understanding of how to build software.

To put it another way, the parent could have said "IDEs are awesome and I've found when you combine that with an understanding of the CLI you have an awesome set of tools at your disposal that are greater than the sum of their parts" instead of making it an exclusive trade off or implying that IDEs are limiting.

> instead of making it an exclusive trade off or implying that IDEs are limiting.

But that would be a completely different argument. Their whole point is that there /is/ an exclusive trade-off, so your version wouldn't be saying what they're trying to say.

If you want my opinion, they both piss me off, just in different ways. The IDE requires me to use my mouse, even with the best Vim impersonation plugins. I hate that, but I put up with it because the autocomplete, syntax completion and indexing just work so much better than the Vim equivalents, which fall apart under heavy load. And conversely, trying to edit remotely complex projects is a nightmare with Vim, no matter which distributions I use.

Which maybe is a different tone of saying exactly what you are saying.

I literally said that "IDEs are great tools" and my whole argument was that people can broaden their understanding by also learning some shell in addition to their IDE.

So not sure what your point is.

I'm sure that was your true intent, but the way you framed it made it seem like it was in exclusion(the dismissive "Cool." at the top didn't help).

At the risk of sounding patronizing: as much as we'd like everything to be binary pass/fail and survive on the technical merits or semantic details, how you frame things and driving communication in an inclusive way is important if you want to convince people that something is worthwhile.

I agree, but I can only control how other people understand my words to some degree, especially in a fast paced, low effort forum conversation. The prejudice of the reader will drive their interpretation more than my actual intent.

You can read my comments as gatekeeping or as the exact opposite - an invitation to come through the gate and see what you can learn here and take what is useful to you.

I didn't mean to frame the IDE as a crutch, I'm merely saying that you should try to not let it become a crutch aka. overly rely on it.

I recently worked with a colleague who couldn't run the python program he wrote without the IDE. The IDE was not installed on the PC we were testing the program on.

He is a better programmer than me.

And I'm not saying "people who are using IDEs don't know anything".

I'm literally only talking about the benefits of also knowing some basic shell and CLI stuff.

For a lot of people, the IDE does end up being a crutch. I was one of those people, and I can honestly say that over reliance on IDEs really held me back. I became a much stronger programmer when I ditched the IDEs for most tasks. I'll still use IDEs for a few select things, but overall they tend to hide a lot from you and you end up learning the IDE instead of the concepts of whatever language or framework you're working in.

> Some people are happy in their IDEs and see no reason to move to the lower-level of abstraction in their tooling.

True but you're also at the mercy of whatever company is responsible for developing those IDEs. I'm not advocating everyone do development in shell but I feel like you should know what's going under the hood when things go wrong.

> True but you're also at the mercy of whatever company is responsible for developing those IDEs.

With all the IDE companies being so ruthless, I wonder where are all those screwed-over IDE users running to.

I mean, we know the answer to this as it started the argument, right?... vim, emacs, and a handful of programmer-oriented GUI text editors such as Sublime Text (or whatever; I don't use them so I might be mixing up "what's cool" or be so out of date as to be laughable).

its always beneficial to learn technology at a lower level. if you are running Windows, it's good to learn Powershell. if you have a refrigerator, it's good to know how to change your water filter, clean your fan, etc. the purpose of the article is simply stating that familiarizing yourself with posix shells is beneficial to a modern day programmer and I don't think anyone can argue with that.

> Still, a lot of people start out programming with complex compilers and end up being unable to hand-write their assembly code in pure HEX without the "help" of their compiler.

I will reply with a car analogy. You can be a professional driver even if you don't know how to write the chemical formula for the combustion in your motor. You can not be a good professional driver if you don't know how to change a flat tyre or check your oil. No matter how good your driving is, if you cannot change the oil you are an incompetent driver. Maybe in the future, cars will not need oil. So good, but today they do and you need to deal with it.

Likewise, today's computer systems rely on the shell. If you are shell-illiterate, you are an incompetent programmer, no matter how good your programs are.

>> Still, a lot of people start out programming with complex IDEs and end up being unable to run their code without the "play" button of their IDE.

Yep. Another thing I hate is IDEs that build their own project files that tie you to them. A good IDE works with standard tooling, not as a replacement.

I could not agree with this more. I've worked on projects with some unholy combination of (gnu) Makefiles and VS Solutions. I've never "got" Visual Studio and have always found it very jarring -- but I don't use Windows much, and it always feels like it has a way of thinking that is...very orthogonal to mine. KDevelop has honestly been the best "not in your face" IDE I've personally found. I mostly do not do any MacOS programming, but when I have occasionally played, I've realised that xcode is brilliant but it seems like it utterly reinvents itself every major revision number. Eclipse, something I rarely if ever used, has sunk from the popular zeitgeist without a trace. Many of my colleagues (in a medical imaging setting) use jupyter, RStudio or Matlab, occasionally venturing into julia.

The winds shift, and the ship of progress sails a direction that is difficult to chart, but vim....vim never changes, and for that I am very glad.

I'm not really an IDE person but I find native Netbeans projects easier to understand than Maven ones.

Can't we make this argument for any level of abstraction?

A lot of people start out with shells and end up being unable to understand how to construct a switch or adder in logic gates. A shell is a great tool but shouldn't be a crutch.

Emotionally, I actually agree with this. I'd love for at least a basic understanding of computers down to boolean logic and the very basics of semiconductors up to the OS level and beyond to be "widely known" by programmers and "computer literate".

At the same time, I know that's extremely unreasonable to expect. I guess for the shell and editor, you can argue you ought to understand it stripped of abstractions, since it's actually the level you're working at, while most people don't work at the logic gate level.

As an EE and compsci major, I loved knowing how to form a transistor with n and p types, how to form NAND gates, how to deal with clocks and timing (ok, that’s a lie), how to make vhdl turn into fab rules, how cpu and system design influence compilers, how to write a compiler, various programming languages, np completeness (ok, eh), program design, architecture, distributed systems, machine learning (getting there).

The usefulness of knowing the whole stack has diminishing returns though.

It can be helpful in a large company where I can make connections that others can’t, but in a small company it’s less relevant unless I were solely focused on one specific connection (say, how machine learning using math as designed in an ALU inside a CPU or GPU is ludicrous—go analog and get four orders of magnitude speed up, all that time waiting for carry bit propagation, yuck!)

Your time is probably better spent learning to be a good generalist or specialist in your field, rather than knowing inside so many layers of abstraction.

Another book recommendation: "Elements of Computing Systems" by Noam Nisan and Shimon Schocken.

It doesn't go all the way down to the physics of semiconductors, but it goes down to logic gates and all the way up to tetris (there's a related course/website called nand to tetris: https://www.nand2tetris.org/)

If you want to go further down, check out Jeri Ellsworth's old videos where she built a transistor at home: https://www.youtube.com/watch?v=w_znRopGtbE

You may like "The Secret Life of Programs" by Jonathan Steinhart.

It starts at boolean logic and works up to a web browser and then some other stuff.

Edit: It's a high level covering of each topic of course, it does have to fit in a book. It's ~444 pages.

>Emotionally, I actually agree with this. I'd love for at least a basic understanding of computers down to boolean logic and the very basics of semiconductors up to the OS level and beyond to be "widely known" by programmers and "computer literate".

cf. https://www.pbs.org/show/crash-course-computer-science/

No, it won't give you the level of understanding that an actual comp sci/engineering degree will (after all, it's just a couple dozen ~10 minute videos), but it does touch on all the major concepts and through the series builds on concepts presented earlier.

> At the same time, I know that's extremely unreasonable to expect.

However, that's exactly the path every Electrical & Electronics Engineer classically took.

Yea, this was basically my computer engineering curriculum. Start at solid state physics, build a transistor, boolean logic, digital circuit design, cpu design, operating systems and device drivers, and then software engineering.

Most people in my class ended up at the top of the stack (operating systems and software engineering) since the jobs are more numerous and the pay is better.

> Cant we make this argument for any level of abstraction?

We could also make the opposite argument for any level of abstraction: can you really say that someone who buys apps for their iPhone and runs them is less of a programmer than someone who actually programs? If programming is just getting a job done, and being a good programmer is just picking the right tool to get the job done...

The real question is pragmatic, not theoretical. Does your dependence on a tool sometimes make easy things impossible or encourage misunderstandings about lower level processes that lead to bugs or inefficiencies? Is it simply too big or expensive to run in all of the places you might want to program? Does the tool make up for that lack of flexibility with increased productivity? Those are real questions that you can ask about any specific tool (including the shell.) It doesn't mean anything to ask them about tools in general, and the idea that sacrifices and benefits must all come out even in the end is just the law of averages.

Absolutely true! Ordered an FPGA dev kit last week for this reason.

Perfect world, you know everything.

In reality, you can't and don't need to. Knowing some shell stuff will be useful for most if not all programmers.

You know nothing... the VHDL-to-LUT transformation (and the content of the bitfile) are totally locked trade secrets.

I got the QuickLogic dev kit - it uses the fully open source Symbiflow toolchain.

I think my chances of ever understanding what Symbiflow does are quite small, but that is a shortcoming on my side :D

And even if I do, I have to learn semiconductor fabrication next!

I entirely agree with you there is a time for the shell and a time for an IDE and knowing when and knowing both is a huge win.

My comment is more addressing the article where in the author mentions vim with no extensions and grepping. IMO that's often the wrong tool for the job (writing code). It can be a useful tool but there are often better tools for that job. Knowing when to use one vs the other and therefore knowing both is very useful.

Agreed. IDEs were how I was taught in class, but discovering shell tooling on my own removed the magic and made programming a transparent process.

I didn't really start out with an IDE; I became overwhelmed using one. It's just my personal preference, but I love using text editors: they are very simple and minimal.

My understanding, as an exclusively *nix developer, is that if you're a Windows dev, all you really have are IDEs. Yes, I know about Cygwin and WSL and the rest, but if you're a run-of-the-mill Windows developer then your life is centered around Visual Studio and NetBeans and PyCharm and maybe Powershell and maybe, maybe cmd.exe if you're a greybeard.

That vim and bash and all of these terminal-driven ways of developing are the unique privilege (and I do say that unironically) of working with *nix systems.

That’s not to say Windows devs don’t use vim, but in my (limited) experience, it’s about as common as using a gas-powered generator to charge your Tesla.

I do like my IDE but my one complaint with leaning on it a lot is that it lets you write code that is harder to understand if you don't have that IDE readily available.

For example, if you name a member "flag" and you have several types with the same member, your IDE can tell you where this flag is used, so it's not a big deal if you wanted to refactor or are trying to track down a bug. But god help you if you're looking at your code in a diff viewer or in your browser in an online repo.

I'm not really sure of your point. I can't imagine naming something "flag" but I can imagine "enabled" or "visible"

In my experience it's not common to name members/properties taking other classes into account, so I might have a Window class with "visible", a Shape class also with "visible", and a Player class with "visible". I don't think I've ever seen a project where they made those "someWindow.windowVisible", "someShape.shapeVisible", "somePlayer.playerVisible" just for the sake of searching, regardless of whether the person is using an IDE that could tell the difference or just notepad/nano.

All of my experience is they would all be just "someWindow.visible", "someShape.visible", "somePlayer.visible". In other words, that choice has nothing to do with IDE or no IDE.

Is there some other example you were thinking of?

A better example might be perpetuating typos (and other less obviously wrong bad names), the number of times I've seen some variant of "setIntercetpor" or "closeDataSteam" or... anything that gets actually typed as "most of the first word plus completion."

Intellij would mark those with squiggly underlines and even jump there in an otherwise error free file when you press f2.

Your example sounded like results produced with editors that do not understand the underlying language. A proper IDE would not prompt you with closeDataSteam if it's not a member function of the object to which you want to close the data stream.

Right, the typo was done once and from then on the IDE offered it up for auto completion.

I think it also comes into the language design. If you expect programmers to use an IDE, then language features and best practices can require large amounts of boilerplate, on the expectation that the IDE will be generating that boilerplate. For languages expecting no IDE to be used, the language itself needs to avoid that boilerplate.

1) I'm not going to teach Jenkins to use IntelliJ. I want my build step (whether gradlew or make) to be runnable in the same way as it runs in scripts. I get to set (and read!) env vars when they're right there in front of me, not hidden away behind dialog boxes.

> 1000x faster than opening a shell

Don't close the shell.

> trying to grep all the project files for word phrases which, not being language aware, can't tell if one 'foo' is relevant or irrelevant from another 'foo'.

IntelliJ is the gold standard according to my Java buddies and when I search for a class (by hitting ctrl-N ... go figure) I frequently don't get any results. Some kind of misconfiguration will silently make things not work. It happens enough that I have to second-guess its results, even when they're correct. Now I only use ctrl-N to jump back and forth between classes I know exist. Actual searches I'll leave to find/grep/ag.
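For the record, the grep fallback being described looks something like this (the class name and paths are invented):

```shell
# Tiny fake project so the search has something to find
proj=$(mktemp -d)
mkdir -p "$proj/src/billing"
printf 'public class PaymentService {}\n' > "$proj/src/billing/PaymentService.java"

# -r recurses, -n prints line numbers, --include restricts to Java sources
grep -rn --include='*.java' 'class PaymentService' "$proj/src"
```

Restricting by extension filters out most of the noise, even though grep isn't language-aware; the trade-off is exactly the one debated above.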

Likewise when I optimistically right-click some part of my project and click 'Run all tests'... "No tests were found".

Regarding Intellij and looking for classes / running test: I think this mostly depends on whether you're casually browsing some random project, or if you're gonna work on the same project for several weeks.

It's fine to try opening a random project in your favorite IDE and see if it just works. If it doesn't, and you just need to figure out what the code does or fix a few lines, then going back to a text editor, grepping, and trusting the CI to run the tests is just less overhead.

If you work on the same project for a longer period of time, you can easily justify the upfront cost of configuring your environment properly (and have your class searches and 'Run all tests' running).

On top of that, his opening example is so easy if you use a GUI for Git it's mind-boggling.

It'd literally be click status column to sort, click top entry, shift-click bottom deleted file, right-click, restore. Done, a couple of seconds without even thinking about it.

And somehow this simple task in a GUI inspired a blog post about how useful the shell is.

I'm not going to deny a shell is extremely useful, and knowing far more than I do makes you a better programmer, but there's also using the right tool for the job.

The point of the article clearly is how a few general-purpose shell tools can combine to be the "right" tool in many different use cases they weren't specifically designed for. You can argue for a more specialized tool, and that's okay. But saying just "this" task inspired the blog post misses the point of the article, which is generality.

In any case there are like a bazillion of GUI git tools. Are you certain each and all of them support this feature? If someone doesn't know any of them, what are the odds the first GUI tool they install has this feature? The hidden cost of finding the "right" tool is overlooked in its advocacy.

And there are a bazillion CLI tools, you can't use that defence.

That relies on there being a sufficiently versatile gui which isn’t guaranteed. Here’s some features that your example requires which I often don’t see in guis:

1. Tables you can sort

2. Tables you can sort on the thing you actually care about

3. Being able to select multiple entries and act on your selection

It’s not too hard to imagine criteria which could make the problem much harder for a gui, e.g. only doing the operation in a subdirectory (if the sorting is stable this isn’t so bad), or only on files not in a subdirectory (stable sorting won’t help here, you’d need the gui to let you eg search for / and then remove those things from your current selection), or only operating on .h files (you’d need a search->select feature again), or doing an operation that the gui doesn’t support on multiple files, or passing the list to some other tool (maybe you want to search makefiles for references to the deleted files).

Something that a gui may be able to handle but that often screws up shells would be file names with newlines or spaces in them.

It might not be hard to imagine it, but it's not actually that common an occurrence.

When's the last time you accidentally deleted 200 files and needed to restore them? Probably because you were using a shell...

Oh yeah, and if you accidentally delete folder in a GUI? Usually you can just press ctrl-z, undo!

In the end, you're gonna write regex either way :D

Now do it across 5 projects. Or help explain to all your teammates how to do it.

With the shell, the second time you do it is a few keystrokes, if even. Wanna do it across multiple projects, it’s super easy without ever loading any of them in your IDE.

Wanna share it with teammates, you simply need to pass a script file to them, and they can do it without having to do anything.

And now finally if you need to incorporate this behavior in your headless CI? It’s the same script, instead of having to putz about with the CI configurations or whatever.
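As a sketch, the hand-off to teammates and CI is just the pipeline dropped into a file (the script name is made up; it uses `git ls-files -d` rather than parsing `git status` so odd file names survive):

```shell
# Capture the pipeline once...
cat > restore-deleted.sh <<'EOF'
#!/bin/sh
set -eu   # stop loudly instead of half-running
# restore files deleted from the working tree but not staged
git ls-files -d -z | xargs -0 git checkout --
EOF
chmod +x restore-deleted.sh
# ...then every teammate and the headless CI job run the identical thing:
#   ./restore-deleted.sh
```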

a) It's not a common task

b) a few keystrokes === a few clicks

> Done, a couple of seconds without even thinking about it.

This was originally done in a couple of seconds without deeply thinking about it (hence originally using grep instead of `awk '/re/'`).

(Edit: quote)

I strongly agree with this narrative and it matches my own experience. I spent 9 years of college and then grad school using exclusively either vi or emacs because I believed IDEs were a crutch that weak engineers relied upon. It wasn't until I got a real job that I found myself forced to learn a proper IDE (IntelliJ).

It blew my mind just how powerful the IDE was and how it massively increased my productivity. Putting aside code completion and code generate, I found it much easier to navigate a codebase using a proper IDE that fully understood the language. When I contrast that to my previous solution, a highly customized emacs configuration with lots of packages and my own functions, the two just don't compare.

I think part of this is that the people who use IDEs then prefer the actual code to be dumbed down to the point where the IDE can both understand it and then is required to manipulate it; I hate programming in Java because it is a language kind of designed to make it hard for me to build my own rapid abstractions... which of course would break the ability of the IDE to parse it at all much less refactor it; but, I then argue that the more powerful code abstractions obviate the language flaws that required "refactoring" in the first place. And to me, that is a way in which I feel like IDEs are awkward and harmful, but it is almost more at an ecosystem level than at the scale of a single person.

You have it totally backwards: people who use IDEs don't "dumb down" code for the sake of the IDE. Nor are languages like Java designed with their IDEs in mind. The IDE is designed with the full language spec in mind, understands the language at the AST level, and supports the full set of language features in all their permutations. A developer writing code in an IDE can use every permutation of what the language provides, and it is up to the IDE to understand it.

IDEs for the most part are not covering for language flaws; they help developers be more productive by taking over the less interesting parts of the process so they can focus on the problem being solved. While the need for an IDE is stronger in some languages than others, there is a set of common problems in every language that the IDE can help solve (navigating the codebase by jumping to symbol definitions/usages, auto-importing packages/modules, intellisense, refactoring).

Refactoring does not mean going back and reworking 6 weeks' worth of code, which is what you seem to have in mind. Most people who refer to refactoring mean multiple small refactors during very short development cycles (every few minutes or hours). My workflow, and the workflow of many developers I have worked with, goes like this:

* Write some code (not more than 200-500 lines)

* Write tests

* Refactor if needed- potentially rename variables, potentially extract some code into methods, potentially pull stuff up into constants, potentially introduce a new class or interface, potentially change types of certain members, potentially visibility of certain members or methods.

* Repeat for the entire workday when you are not in meetings.

Refactoring- specifically the kind of small scale refactoring that you do as go through your workday to me is almost as important a part of the development process as anything else and IDEs remove almost all friction from it by removing any cognitive load of the task from the developer.

Which powerful language do you use that can't be understood by the IDEs. Don't you think that if your compiler can understand it, the IDE can too? :)

I've found that IDEs work well when you're working within a single language or framework, but are clunky for multi-language projects. It's easier for me to have 4 terminals open (2 for vim, 2 for execution) than two different language-aware IDEs. With the IDEs, I need to pay attention to which IDE I'm in since they have different shortcuts and UIs.

Which languages do you mix and match that something like IntelliJ IDEA can't support?

Opening a shell is not an obstacle when you've already got several open. Also, not all of us are refactoring huge codebases or using verbose languages that require a lot of handholding. The hard part of development is typically not at the typing or syntax levels, and I have the stdlib of my favorite libs close to memorized already. The most helpful ides tend to be massive as well.

Tools can be useful however, such as the hot reload of flutter/dart. That's the kind of improvement I really appreciate.

Regarding the hard part not being the typing - yes!

Every tool, plugin or functionality takes up some space in my mind. Once I know the tool really well, that space becomes smaller and the benefit of having the tool outweighs the cost. This is why I only add tools and plugins slowly. One at a time; only after I've mastered one do I add something new. And I weed out my tools from time to time, removing things I don't use often enough.

Maybe the people who use feature rich IDEs just have more RAM and a more parallel brain, and the CLI people have serial brains :D

It wasn't clear from your comment if you're aware of this, but Flutter hot-reload works from the command line too! I do all my Flutter development in 3 windows (vim, `flutter run` shell, android emulator).

Glad to hear, I've barely started with it.

> they'll show me all references, take me to the definition or declaration, stack those jumps so as I follow the links I can pop back a level to where I was. All of this is instant.

I do this in stock vim using ctags files. You can generate them with anything, though “exuberant ctags” is very popular. Ctrl+] when over an identifier jumps to its definition, and there’s a tag stack to jump backwards similarly (you can also use the jump list). This too, is instant. It’s also an essential feature of the help system in vim for navigating cross-references.

There are also other robust IDE-like features built in: syntax-aware code folding (with adjustable fold levels, see 'folding'), compiler integration so that it'll highlight where compilation errors occurred (see :make, 'makeprg', 'errorformat'), and so forth. Not saying your life won't be better with some plugins, but stock vim is quite a bit more powerful than having to shell out and grep.
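For the curious, the ctags workflow described above is only two steps. A minimal sketch (the demo file and paths are made up, and it assumes Exuberant or Universal Ctags is available):

```shell
# Create a tiny C file and index it; vim jumps via the resulting "tags" file.
mkdir -p /tmp/ctags_demo && cd /tmp/ctags_demo
cat > hello.c <<'EOF'
int add(int a, int b) { return a + b; }
EOF
if command -v ctags >/dev/null 2>&1; then
    ctags -R .                 # writes ./tags: one line per definition
    grep '^add' tags || true   # the entry Ctrl+] follows for "add"
else
    echo "ctags not installed; see https://ctags.io"
fi
```

In vim, Ctrl+] over `add` then jumps to the definition, and Ctrl+T pops the tag stack back to where you came from.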

The point of an IDE is that there is almost zero configuration. You get everything by just clicking "install". Yes, you can configure vim or emacs to be as expressive as IntelliJ, but the time it takes is much larger than zero. And when you switch to another language you'll probably need to repeat many of the steps.

The point of the OP is to not configure vim or emacs to do all that but to simply use the UNIX (or platform equivalent) shell.

I've been using a lot of TypeScript lately. The IDE support in vscode/tsserver is very complete, but it is NOT fast. As an example, it frequently takes multiple minutes to use find references. Rg or git grep with fzf are both hundreds of times faster, and are easier for me to navigate. (Obviously language-aware find references is much more powerful, but with carefully named variables it's acceptable to just use rg.) Sometimes, even a simple "go to definition" results in a 45-second hang. (This is on a 2019 MacBook Pro.)
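The grep-as-find-references flow described here can be as small as the following sketch (the file and symbol are made up; plain grep is shown because it's universal, with rg as a faster drop-in):

```shell
# Two "references" to getUser: its definition and one call site.
mkdir -p /tmp/refs_demo && cd /tmp/refs_demo
cat > user.ts <<'EOF'
export function getUser(id: string) { return id; }
const u = getUser("42");
EOF
grep -rn 'getUser' .    # -r recurse, -n line numbers; rg equivalent: rg -n getUser
```

Piping that into fzf gives the interactive jump list: `grep -rn getUser . | fzf`.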

I've also used IntelliJ IDEA with Java, and while it was much smoother than anything in the TypeScript ecosystem, it's definitely not faster than using grep.

> 1000x faster than opening a shell and trying to...

Not commenting on who is right or wrong, but there is a key difference in your workflow here. OP is not opening a shell at this point. The shell is the first thing being launched, vim is secondary.

> The difference is like using a hammer vs using a nail gun.

I feel like this metaphor falls short.

The IDE, IMHO, is more like a multi-tool with pre-chosen use cases, but everything fits together nicely.

Using the shell is more like choosing the exact tools for the job, and maybe connecting them together in a make-shift fashion.

Yup, like having a 9-in-one painter's tool vs having those 9 tools individually in a toolbox.

Usually, it's super convenient to have all those tools integrated, because you can scrape some paint and pound a nail and pull a nail all right then and there, without needing to do any swapping out or hunting for the right tool.

But the instant you want to be able to hold a bolt in place with the wrench function while simultaneously using the hammer function, you're shit out of luck. The painter's tool, on its own, can't be de-composed to do those things simultaneously. Plus, it's not as good at being a general purpose hammer or a general purpose wrench. But in the context of the tool, you don't really need a full fledged hammer or wrench. Having a single small tool you can hold in one hand is usually more pragmatic, even if less versatile.

Probably still gonna want the toolbox to be available for when you need it, though.

How do you refactor, say, Java or C# or Typescript with Vim?

These days, language servers and wiring up one of the clients in Vim.

Have you used them? Are they reliable/stable/fast? Easy to configure?

Sounds like a reinforcement of Drew's point to me. Using the IDE to jump all over the place, and blast changes over several files at once, suggests severe and unnecessary non-localities and tight couplings in the code.

Maybe without the IDE, the code would have ended up a little cleaner.

> Maybe without the IDE, the code would have ended up a little cleaner.

When you can’t easily jump from file to file and get intellisense like completion you are forced to design modules and data models which can fit inside your head. I imagine this can be a feature in some cases and a limitation in others.

I tend to only use an IDE when refactoring, and a text editor for everything else. So far I haven’t had to write or work on code which I can’t keep all in my head; maybe I have just been lucky with the codebases I have worked on?

> I imagine this can be a feature in some cases and a limitation in others.

Technically true, but it's only a limitation in the sense that not being allowed to use asbestos shingles and lead pipes is a limitation for someone building a house.

A big part of the reason I gravitated toward Emacs is that it hits the best of both worlds there; by default, it's a simple editor, but it can be extended with the exact set of IDE-esque features with which I'm comfortable.

> take me to the definition or declaration, stack those jumps so as I follow the links I can pop back a level to where I was.

Know any C/C++ IDE that would do this? I transitioned from Qt Creator to VSCode and none of those have helped me with this cool nicety you mention.

EDIT: I should add that for me, exploring foreign code bases has been hugely improved since I discovered Sourcetrail, so I'd suggest having a look at it to anyone interested in code exploration tools: https://www.sourcetrail.com/

I've found JetBrains' CLion to do a solid job of code navigation, but I don't currently work in C/C++ and only use it occasionally for reading code bases.

I do use JetBrains' other editors, and IntelliJ has flawless Java code navigation (among other amazing features). Further, I've found PyCharm provides Python code navigation that is about as good as possible for such a dynamic language. Therefore I'd expect CLion to provide top-notch code navigation for C/C++.

vim can definitely do it with CoC

> Depending on the language they'll show me all references, take me to the definition or declaration, stack those jumps so as I follow the links I can pop back a level to where I was. All of this is instant.

Vim can do this too and (rip)grepping the code is only needed if you're looking for, you know, text. I have 'gd' mapped to "go to declaration" and 'gr' to "find references". Works across dozens of languages, can be further scripted upon. Basic refactoring (renaming class method in several files) should also work, but I use it so rarely that actually can't testify about it (I use rope bindings for python refactoring but it's usually limited to a single file).

It's not as honed as a purpose-built 5 GB IDE, of course. But having one editor for all languages and syntaxes is really nice. Things like VSCode have this too, but I wouldn't call them an IDE either.

I haven't been using IDEs any meaningful amount of time as I'm not a developer, but if the things you mentioned (go to definition, changing names) are the main reasons for an IDE, one could argue that the terminal workflow has substantial benefits: better understanding of the toolchain, simpler integration with CI/CD tools, endless customization, a single tool across languages, and the other standard pros of CLI tools.

I do, however, find vim without extensions useless for editing code as well.

I agree.

But it's worth pointing out that IDEs provide depth for working on a given project. If you want breadth across many projects (like writing a custom linting script), you do want to be shell literate. Not every automation task can be specified with grep and such, but quite a number of them can. And the ones that can't be written in terms of the shell often benefit from hybrid solutions that combine shell tools with specialty tools like parsers.
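As an illustration of the "breadth" case, a cross-project lint can be a one-screen script. A sketch with an entirely hypothetical rule, repos, and paths:

```shell
# Flag TODO comments across several checkouts at once.
mkdir -p /tmp/lint_demo/projA /tmp/lint_demo/projB
echo 'x = 1  # TODO: remove before release' > /tmp/lint_demo/projA/a.py
echo 'print("ok")'                          > /tmp/lint_demo/projB/b.py
for repo in /tmp/lint_demo/*/; do
    grep -rn 'TODO' "$repo" || true    # || true: a clean repo is not an error
done
```

The same loop works unchanged whether the checkouts are Python, Java, or anything else, which is exactly the breadth an IDE project model doesn't give you.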

Nah, the issue is that whenever the IDE doesn't provide a shortcut for whatever you're back to negative one. That's why people like mechanisms (grep, sql or similar) over policies (here, whatever is given to you by the IDE), you can always adapt.

Other than that, a good deal of open mindedness and skillful use of IDEs is not a bad trait. Let's not be extremists.

> Nah, the issue is that whenever the IDE doesn't provide a shortcut for whatever you're back to negative one.

You are not back to negative one, more like you are at square 85. There is nothing stopping a developer from supplementing something that the IDE doesn't easily support. For example, I use PyCharm these days, and when I make a change to a file I automatically want the test associated with that file to run. This is not supported in PyCharm (or at least I don't know how to get PyCharm to do it), so I wrote a script that runs in the background of my shell, watching for changes to files in my project and running the appropriate tests.
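The commenter's actual script isn't shown; one way such a watcher might look using only POSIX tools (the `src`/`tests` layout and the test-naming convention are assumptions):

```shell
# Poll for recently changed source files and run their matching tests.
watch_tests() {
    touch .last_run
    while true; do
        find src -name '*.py' -newer .last_run 2>/dev/null |
        while read -r f; do
            t="tests/test_$(basename "$f")"    # src/foo.py -> tests/test_foo.py
            [ -f "$t" ] && pytest "$t"
        done
        touch .last_run
        sleep 2
    done
}
```

Tools like `entr` or `inotifywait` do the same thing event-driven instead of polling, if they're available.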

The point is: if an IDE doesn't have a quick way of doing something that is necessary for your workflow, it doesn't mean that the 20 other things the IDE does are useless. Any working developer should continue to extract value from the features the IDE provides and jump in to fill the holes themselves (via shell scripts or what have you), instead of saying "Whelp, this means the IDE is a crutch now and I must revert to only using 'Unix as IDE', relying solely on sed/awk/grep and friends to aid my development workflow."

Maybe I was traumatized by Eclipse. Extending anything in it required a federal meeting and a task force.

The two worlds, editing general-purpose text files versus editing a large amount of one very specific kind of text file, actually look very similar when compared.

I’ve never seen advocacy for one over the other as a competition. I do plenty of both types of editing, and often edit the same files simultaneously with three different tools.

(Example: Editing notes in Notable, Atom, and vim.)

I'd also argue that it's helpful to understand the technology underlying your IDE, if only to be able to debug builds. (Understanding that MSBuild exists, and can be called from the command-line, for example, is a handy skill).

Those tools are all great and I have that in Vim but I also have the whole shell environment to do many other things that have nothing to do with the content of the code.

For scriptable editors the difference is more like having a Home Depot vs. a multi-feature nail gun. There's the option to use other people's scripts (buy a nail gun directly), or to script yourself (buy parts and build your own nailing tool that suits your needs, though you do need to invest time in learning and building).

A literary man creates new meaning out of words. You can do this in the shell by combining tools in new ways. You can not do this easily with an IDE, it only has a predefined meaning. Using an IDE is not literature but transcription.

Well, 95% of software development is writing invoices, not poetry, so...

This speaks more to the narrowness of your view than it does about shell.

Example: I work with parallel filesystems quite a lot. Often I will need to parse through GB of logs spread across a cluster to track down a particular sequence of events for troubleshooting.

I could load up a fancy IDE with all the bells and whistles, one that instantly refactors all my 'foo', and still spend a (very) non-trivial amount of time carefully crafting (and documenting, and debugging) a proper application to solve the particular problem I'm working on.

OR - I could rattle off a pipeline using a few general tools and get the needed details in less than a minute.
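Concretely, that "rattle off a pipeline" step might look like the following sketch (the hostnames, paths, and log contents are invented; in practice each log would arrive via something like `ssh $host grep ... /var/log/...` rather than local files):

```shell
# Merge error lines from several nodes' logs into one timeline.
mkdir -p /tmp/logs_demo && cd /tmp/logs_demo
printf '2023-04-01T10:02 node2 I/O error on ost3\n' > node2.log
printf '2023-04-01T10:01 node1 I/O error on ost1\n2023-04-01T10:03 node1 read ok\n' > node1.log
grep -h 'I/O error' node1.log node2.log | sort    # -h: omit filename prefix
```

Because the timestamps sort lexically, a plain `sort` yields the cross-node sequence of events, which is the whole point of the exercise.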

Your analogy about hammers is particularly amusing, invoking the old saw "If the only tool you have is a hammer, every problem looks like a nail". You take this to the absurd degree that you end up championing nail guns!

> "parse through GB of logs spread across a cluster" Why would you do that with IDE?

The OP is comparing IDE vs. shell in terms of "exploring" and "editing" code. In that case the IDE is a powerful nail gun, and championing it isn't wrong.
