Programs that have saved me 100+ hours (sadacaraveo.com)
389 points by dshacker on March 16, 2017 | 203 comments



AutoHotkey: create scripts to automate all kinds of things, usually triggered by keyboard/mouse conditions.

Here are a few - small things, but they incrementally save hours and hours:

- CTRL+@ - pastes my email address at the cursor

- ALT+MouseWheel - Page up / Page down

- ]d - inserts the current date and time at the cursor

- CAPSLOCK - sets the window's transparency to 75 for as long as Caps Lock is held down

- #t - open http://e.ggtimer.com

It's also set up as a universal spell-correct and IntelliSense for SQL / JS / PHP, etc., independent of the IDE
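The hotkeys above could be sketched in AutoHotkey (v1 syntax) roughly like this - a hedged sketch only, with a placeholder email address, timestamp format, and transparency value, not the commenter's actual script:

```autohotkey
; CTRL+@ (Ctrl+Shift+2 on US layouts) - paste an email address at the cursor
^+2::SendInput you@example.com

; ALT+MouseWheel - Page Up / Page Down
!WheelUp::Send {PgUp}
!WheelDown::Send {PgDn}

; ]d - insert the current date and time (hotstring)
:*:]d::
FormatTime, now,, yyyy-MM-dd HH:mm
SendInput %now%
return

; CAPSLOCK held down - make the active window partially transparent
CapsLock::WinSet, Transparent, 191, A
CapsLock Up::WinSet, Transparent, Off, A

; #t (Win+T) - open the timer site
#t::Run http://e.ggtimer.com
```

Each `::` line binds a key combination to an action; the `:*:]d::` hotstring fires as soon as the characters are typed, with no ending key required.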




Set up jump locations and shortcuts so that you can quickly move between project folders in PowerShell




The original author's site isn't available anymore; not sure why, because this was a super useful utility. It's like Ant Renamer or File Renamer or any of those, but I just prefer this one's simplicity




Bulk edit ID3 tags on MP3 files (if anyone has MP3s anymore)




Simple tool to repeatedly format text. Like a foreach for plain text

BetterTouchTool (https://www.boastr.net/) is kinda like an AutoHotKey for OS X -- but on top of custom keyboard shortcuts, it also allows you to configure custom trackpad gestures.

It's easily saved me 100+ hours -- a half-second at a time, 400 times a day. And probably a couple cases of RSI, a couple inches of finger contortion at a time.

The economy of motion is as far beyond keyboard shortcuts as keyboard shortcuts are beyond hunting around with a mouse. It's a crime more people don't use it.

For OS X I wound up going the completely built-in route: Create Automator Workflow Service > Assign Hotkey to service > the service has one item: Run AppleScript.

I miss the days when applications came with proper applescript dictionaries. Nowadays I have to pull shit like

  tell application "System Events"
      tell process "Firefox"
          get menu item "Quit" of menu "File" of menu bar 1
      end tell
  end tell

I love BetterTouchTool, especially since it allows setting everything on a per-application basis. I use it to get the same keyboard shortcuts for navigating tabs in everything - browsers, iTerm, Finder, IDEs and more. It's so nice to have consistent shortcuts.

It's also really practical for programs that don't support custom keybinds at all, or when running multiple instances at the same time, like multiple Firefox profiles. No need to set the new shortcuts in every profile; they just always work on every running Firefox instance.

It also has nice features for window snapping and resizing. I have two of my side mouse buttons mapped to window move and window resize respectively; whenever I hold one of them down, the window under the mouse pointer moves/resizes accordingly until I release the button, including snapping to edges and all that nice stuff.

BTT is one of the apps I miss most whenever I use Windows; it makes customizing the user experience so much easier.

And for making keyboard input do things you didn't know were possible, OS X has Karabiner (http://pqrs.org/osx/karabiner/). Advanced stuff has a steep learning curve, but the author's really helpful.

TIL that BetterTouchTool can be used to configure keyboard shortcuts.

What are your favorite or most-used shortcuts?

This is a relatively fresh reinstall, and I've only been adding gestures as they come up [0] -- so this should be pretty representative of my top usage (slightly reordered for organization):



I think I can say with a fair amount of confidence that 3 Finger Tap (open in new tab), Pinch In/Out (close/open tab), and Rotate Left/Right (change tab) are my most used custom gestures.

3 Finger Swipe Left/Right (back/forward) is configurable in vanilla System Preferences, but done in BTT for consistency, and 4 Finger Swipe Up (Show Desktop)/Down (Mission Control) have their directions reversed from the system defaults [1].

Really, every one of the gestures on there is pretty indispensable to me feeling comfortable using a computer. Going back to keyboard shortcuts for the same actions (eg, when I'm using someone else's computer) feels like hunting-and-clicking menu items with a mouse.


[0] Import/Export is a thing, but I usually use a reinstall as an opportunity to tighten things up.

[1] Because this way makes way more intuitive sense to me. Swipe Up = Swipe Away = "GTFO windows". What the hell, Apple HCI engineers?

I enjoy BTT bunches and use a non-Apple keyboard/mouse.

Rebinding the F1-F4 keys to tab navigation (previous, next, close tab, new tab) works well, as does rebinding extra mouse buttons to get gestures back on non-Apple mice (tilting the scroll wheel for smart zoom). Other setups would be neat to hear about.


I love the AutoHotKey community. People have created completely crazy things (like a fully featured IDE for AutoHotKey) in something that so obviously is not suited for the task. It's great.

Awesome AutoHotkey list


A curated list of awesome AutoHotkey libraries, library distributions, scripts, tools and resources.

Mp3Tag is a pretty amazing piece of software.

I have a bit of OCD when it comes to micromanaging my music library, and the bulk editing and automation features in Mp3Tag are the only reason I ever have time left to do anything else.

Me too! I wince if I find a folder with names in the wrong format, or no album artist

Is there anything like AutoHotkey for Linux?

On Linux it depends on your desktop environment, I suppose.

Or if you just use a window manager, like i3wm for example, you can define all sorts of hotkeys, which you can also bind to custom shell scripts, or anything you like, in your i3 config.

Edit: I used this a while ago, when I was using bspwm as my window manager: https://github.com/baskerville/sxhkd

I'd have to throw most of the standard UNIX utils in there: grep, awk, cut, sed, sort, uniq, and of course, vim.

Outside of the tech world, people seem to think that grabbing some columns out of a file and rearranging them or pasting them somewhere else is some kind of sorcery.
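That "sorcery" is usually a one-liner. A sketch (the file name and data are made up for illustration):

```shell
# Sample data: name,city pairs in a made-up file
printf 'alice,oslo\nbob,lima\n' > people.csv

# Grab the two columns in reverse order: city,name
awk -F, -v OFS=, '{ print $2, $1 }' people.csv

# Or just pull out a single column
cut -d, -f2 people.csv
```

The `-F,`/`-d,` flags set the column delimiter; from there, rearranging fields is just naming them in the order you want.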

One unix based utility that has saved me incredible amounts of time is 'vimv'.[1]

You run vimv and you get your pwd in the editor, where you can edit it like a text file. Then just save and quit, and all of your filename changes get committed to that directory.

It's an extremely fast way to do a bunch of random and irregular edits to a big directory full of files.

[1] https://github.com/ivanmaeder/vimv

This sounds similar to 'vidir', which is available in Debian/Ubuntu in the package 'moreutils'. Indeed very useful for editing large directories.

Interesting. It's packaged for macOS, too. Install Homebrew via https://brew.sh/ then type

$ brew install moreutils

Or wdired mode in GNU emacs

That's amazing. I think I've been looking for this tool for... many, many years. Thanks!

Ditto for CentOS.

Wow, this is really cool! Wish I had known about this sooner!

I considered writing about these, but

1) some guru out there has probably done a blog post on how to automate things with all of these.

2) I wanted to write something that could help anyone, not only people who are well versed in tech.

But I agree: most of the renaming/sorting/parsing can, in general, be done with unix commands.

"1) some guru out there has probably done a blog post on how to automate things with all of these."

… in 1982, before the word "blog" even existed - and the advice/instructions still work just fine on any modern, vaguely POSIX/unix-like command shell, including on Mac and Windows, which also didn't exist when the piece was written.

(Gopher search in "Unix - Utilities - Tutorials - sed and awk - brian kernighan" or dial in to your WELL bbs number and ask for a copy of "sed-awk-tutorial.txt" in the Software conference...)

I'm just a lowly WordPress developer but I live on the command line (iTerm and Emacs on OS X).

The dev lead at an agency I worked at used to tell me that using such old tools was going to hold me back. I've found the opposite to be the case.

I'm by no means an expert, but even my basic skills allow me to do things that they couldn't, with fewer keystrokes.

They also didn't complain when I created some bash scripts to automate pushing development databases around.


That dev lead was fundamentally wrong and you did well. I dare say that as a Wordpress developer, knowing your Unix tools you have a significant advantage compared to the rest.

There is something incredibly beautiful about the command line, and being able to pipe stuff through those things at lightning speed, finding exactly what you want. It's so pure, and so useful. And it's fun to look at the source for those, too. I've gotten a lot of mileage out of using the basics + jq (https://stedolan.github.io/jq/) to inspect data.

And in the tech world. I had a software engineer on a government contract, making six figures, ask me how to sort lines in a text file.

Three of my forty coworkers actually understand the command-line on a level nearly on par with me, and I'm not an expert (locally, yes; globally, hardly). It's been frustrating me since my first real job in industry and others in government. I think I got spoiled in college by being surrounded by (mostly) curious tinkerers who had no problem delving into the multitude of tools that were available and exploring them. Post-school, people seem to lose that curiosity, if they ever possessed it, as a group.

I've been in job situations where just picking up a handful of hotkeys for a program is seen as Deep Wizardry. Most people just never learn how to do things like that.

And this is why 10x programmers are a thing.

Maybe it's not losing curiosity, but changes in priorities.

IME: I had much more time available to spend learning tools like that. Now that I work, have a family, etc, I don't have as much time to spend.

Learning tools like that should be part of your work imho.

Sure, a part of it is. But I also have a lot of stuff that I have to get done at work as well, that I didn't have to get done at school.

It sounds like you took the wrong job. I was surrounded by curious tinkerers in school, and I'm very happy to say that at my first two jobs I have been surrounded by curious tinkerers.

Maybe you're just a better person than them.

Possibly, but doubtful. I suppose really it's the unwillingness to learn (by otherwise educated and intelligent people, so they have, or at least had, the capacity for learning at one point). This is just a glaring example in the programming field. We literally program computers to do whatever we want (modulo performance or physical limitations); why would our tools be any different?

>I'd have to throw most of the standard UNIX utils in there: grep, awk, cut, sed, sort, uniq, and of course, vim.

Agreed. I'd just add that there are many more tools along the same lines (you just have to look in the various bin directories, not forgetting the custom ones that people write in C / shell / other languages and combinations of the same), and that it's how they work together - thanks to a few powerful key concepts like pipes, I/O redirection, file wildcards, and the various metacharacters starting with $ and others - that really makes the system powerful, more than any individual tool, though those play a part too.
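A sketch of that composition - none of the tools below does anything special on its own, but piped together they answer a real question (the log file and its contents are made up):

```shell
# A made-up access log: who requested what
printf '1.1.1.1 GET /\n2.2.2.2 GET /a\n1.1.1.1 GET /b\n' > access.log

# Filter, cut out the first column, then count and rank the
# unique values - four ordinary tools composing into
# "top requesting IPs"
grep 'GET' access.log | cut -d' ' -f1 | sort | uniq -c | sort -rn
```

Swap `grep` patterns or `cut` fields and the same pipeline answers a different question; that interchangeability is the point.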

And for images, there is netpbm: http://netpbm.sourceforge.net/

Or ImageMagick. On macOS, install Homebrew via http://brew.sh/ and then type

$ brew install imagemagick

Fantastic stuff although I can never remember the commandline options.

I came here expecting someone to recommend 'make' - and some others to link to interesting make resources. I'm kind of surprised no one has.

Make is a utility that has always seemed super useful for non-programming tasks. In my case: take a LaTeX file, create a PDF from it, run tex2html on it and ftp the results up to some web server, run pandoc on it to create a dated epub, etc. But my initial forays into learning make were frustrating and I gave up. Is anyone using make to automate workflow processes, or is that wishful thinking?
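For what it's worth, the workflow described is roughly the shape make is built for: each output names the file it's derived from, and make reruns only the steps whose inputs changed. A hedged sketch - the file names and the tex2html/ftp invocations are illustrative, not a tested build:

```makefile
# Build PDF, HTML, and epub from one LaTeX source,
# redoing only what's out of date.
all: doc.pdf doc.html doc.epub

doc.pdf: doc.tex
	pdflatex doc.tex

doc.html: doc.tex
	tex2html doc.tex

doc.epub: doc.tex
	pandoc -o doc.epub doc.tex

upload: doc.html
	ftp -u ftp://example.com/www/ doc.html
```

Running `make` after editing doc.tex rebuilds all three outputs; running it again does nothing, because every target is newer than its prerequisite.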

The thing about make is that it's a tool for reducing how much computation has to happen when an incremental rebuild takes place.

This is done at the cost of great obfuscation of the build process.

If you can suffer the cost of a full rebuild each time you change something, it is far clearer to have a linear script which executes certain steps one by one.

Such a script can be sped up for incremental builds with some judicious checks like "run this command which makes A out of B, only if A doesn't exist or is older than B".
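In shell, that judicious check is just the `-nt` (newer-than) test. A minimal sketch, with `cp` standing in for the real build command:

```shell
# Rebuild "$2" from "$1" only if "$2" is missing or older than "$1".
# cp stands in for the real build step in this sketch.
build_if_stale() {
    src=$1 dst=$2
    if [ ! -e "$dst" ] || [ "$src" -nt "$dst" ]; then
        cp "$src" "$dst"
        echo "rebuilt $dst"
    else
        echo "$dst is up to date"
    fi
}

echo data > B
build_if_stale B A    # first run: A missing, so it is built
build_if_stale B A    # second run: A is fresh, nothing to do
```

Sprinkle that guard in front of each expensive step of a linear script and you get most of make's time savings without its rule language.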

(P.S. I don't want to overlook make's ability to parallelize builds on multiple cores. That can just be scripted too, with utilities like GNU Parallel.)

Make doesn't really force an incremental build. It's also quite trivial to have a semi-incremental build, where the build unit is libraries not object files.

I've yet to see a tool that doesn't become just as obfuscated for complex builds, though; your script, for instance (bash? python?), is probably just as illegible to someone new to it.

Make doesn't force an incremental rebuild, but that is what it is for. It lets us write a set of rules, which are evaluated as one big whole to determine the minimal set of actions that has to be taken to bring every implicated target up to date. That is pretty much the definition of "incremental rebuild".

If a program is made of libraries, such that when we change a single source file, the entire library which contains the .o has to be re-archived, and then the entire program made up of all those libraries has to be re-linked, that is still an "incremental build".

You might possibly be mixing up "incremental link" (optimized way to just update a function or object file in a previously linked program image) with "incremental build" (compiling only parts of a program in response to a small change).

It is also a bunch of scripts in the one known file, and these can be dependent, programmed (if/for) or included from other sources. Even if you don't care about what make is mainly for, you can see the value in that on its own.

Honestly, I'll never go into some .sh file to see what happens there, but I often open Makefiles to find and tune the details of anything. It... adds something manageable(?), I can't find a word for it.

Of course I use makefiles for tasks not connected to building projects. Not "a lot", but if I need reproducible actions in workdir, then it is Makefile.

Edit: if/for flavor

> It is also a bunch of scripts in the one known file, and these can be dependent, programmed (if/for) or included from other sources

So it's not just one known file. Big trees can have lots of little makefiles.

> I'll never go into some .sh file to see what happens there, but often open Makefiles to find and tune details of anything.

Thanks for sharing. My own personal preference is such that I will never go into a Visual Basic file, but I will go into a Lisp.

You do know that the recipes in Makefiles are shell scripts? (With a subtle difference: each line implicitly executes in its own subshell, so setting variables or doing "cd" doesn't propagate.) In a Makefile you still have to read shell code. It's mixed with make code, with additional quoting rules, and true multi-line shell scripts that run in a single shell have to be written with backslash escapes; ugh.

(Note by the way that I didn't say anything about specifically using shell scripts for builds; I deliberately used the word "script" without qualification.)

> Not "a lot", but if I need reproducible actions in workdir, then it is Makefile.

If you want irreproducible parallel build problems, that's when you want make.

Sometimes Makefiles depend on execution orders which are not actually encoded as dependencies, and these non-asserted orders turn into race conditions in a parallel build. You get a situation where, in one out of twenty builds, the "y.tab.h" file out of yacc didn't get generated yet, while the lexical analyzer that #includes it is already being built. In single-threaded mode, it just so happens that the rule which makes "y.tab.h" always runs first.

I once created an embedded Linux distro from scratch (build and packaging system and all). Over the base of supported packages we were pulling in, I saw quite a number of build issues of this type when the distro build would randomly fail to build a package due to race conditions like this.

GNU Make doesn't have any diagnostic ability that I know of to help you uncover where rules have unexpressed dependencies (a recipe refers to an input file which is the output of another rule, and that input file is not named as a prerequisite).
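The classic shape of the bug, sketched as a fragment (hypothetical file names; yacc's `-d` flag is what writes y.tab.h). The fix is simply naming the generated header as a prerequisite, so `make -j` can't schedule the two recipes in the wrong order:

```makefile
y.tab.c y.tab.h: grammar.y
	yacc -d grammar.y

# Broken under -j: lex.yy.c #includes y.tab.h, but the rule
# doesn't say so, so make may run this before yacc finishes.
#lex.yy.o: lex.yy.c
#	cc -c lex.yy.c

# Fixed: the dependency is stated, so make orders the builds.
lex.yy.o: lex.yy.c y.tab.h
	cc -c lex.yy.c
```

Under serial make the broken version happens to work, which is exactly why these races only surface one build in twenty.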

"Race conditions are hard"

I've been too lazy to learn make, but I pull a python library named doit when I need to do automation that is more complex than a linear shell script (dependencies, etc).


And how many hours have been lost by everyone and their dog re implementing make?

I use make for scientific data analysis, and it has saved me a tremendous amount of time. It's definitely worth the investment to learn the language, in my opinion. I know many folks have tried writing their own version to "fix" the arcane parts, and maybe they're better in some use cases, but make is thoroughly documented and tested and quite general purpose.

I tried this once to automate a several-part procedure. After spending a couple of evenings, I decided it was hopeless: make is coupled tightly to building projects, and shoehorning non-build workflows into a makefile is both awkward and incomprehensible to colleagues.

Well, personally I would just use a shell/python script to do the same. I don't see the point of learning an arcane language for something any language can do easily.

Why have Python when you can use Shell to do the same? Why have Shell when you have C? Why have C when you have ASM?

All of those increments make particular tasks easier. Makefiles eliminate the need to write 'if this doesn't exist or it exists but is older than its dependencies' over and over again. The built in targets also make building C programs quite trivial.

Makefiles can get nasty, but for simple chains of "if this file has been modified take this action" they're a great tool. The syntax isn't even that hard for those types of actions.

    target: dependency ...
    	command to rebuild target (run only when a dependency is newer than target)

Man, I'm pretty convinced that Vim macros have single-handedly saved me years of my life. They're so incredibly useful that it's made Vim one of the first things that I install on any machine.

Same with Emacs keyboard macros. I use them very often to do similar things to many lines. Usually they involve some combination of search to move the cursor, word-based movement, cut and paste, and insertion. It's such a great feeling to define a one off macro in a few seconds and then rip through a whole file!

If you use emacs keyboard macros regularly and find yourself ever re-recording them because you messed them up just a little bit, try out `C-x C-k C-e` (kmacro-edit-macro) sometime. It's life-changing.

Am I missing out by not learning Vim? I know enough to get by for git purposes (renaming all picks to s is amazing), but I feel like I'm really productive with Webstorm. There's a ton of great shortcuts and great features (multi-cursoring, intelligent refactoring, customizable search directories, integrated terminal, etc.) that I feel outplay anything I could do with Vim.

Webstorm (and all IDEA-family IDEs, as well as most IDEs in general) has a fantastic Vim plugin that allows you to use vim bindings within the text-editor of the IDE. That's what I use and it saves me so much pain.


FWIW as a 'real vim' user I typically find these plugins severely lacking.

For instance, I've recently tried the ones in Atom and VSCode and they were both incompatible with my normal workflow. In one of them you couldn't save files with :q, for instance.

They're works in progress, likely--and I'm still grateful to the teams who work on them. They just weren't enough for me.

I completely agree. Perhaps things have changed for some of these plugins since I last tried them, but the only one that has not only basically matched Vim but exceeded it is Evil for Emacs.

I've found the IDEA Vim plugin to be good enough for most of my work, but the macro support isn't great. Specifically, I miss running macros on multiple lines with `:norm@q`.

The Jetbrains plugin is really good. So far I've found that it covers everything I use. Saving is not an issue with Idea since it autosaves and keeps a local history of your changes as you work.

Not to mention, that :w works in intelliJ as well. (and so does :q, to close a tab)

Oops, yes, :w is what I meant.

^o is always the one lacking for me.

>Am I missing out by not learning Vim?

You may be, though ultimately it depends to some extent on personal preference.

If you want to check it out, try this vi quickstart tutorial by me:


I initially wrote it for a couple of Windows sysadmin friends at their request. They had been given a few Unix machines to manage. They said it helped them quickly get up to speed with the basics of vi. I later published it as an article in Linux For You, a print computer magazine (now Open Source For You).

No. I used vim for half a year and then went back to Notepad++ and similar for plain editing. The bottleneck in programming is thinking anyway, not typing. It does not make a difference. For serious coding, I recommend a serious IDE.

For me the bottleneck is finding where I need to read something to understand and then going back to where I need to add or change something.

An IDE with half-decent code browsing capabilities, then. Like VS with ReSharper, or IntelliJ and derivatives.

Any that don't suck at windowing? They all want to show only one, or at best two files at a time. They're primitive compared to the windowing capabilities of vim/emacs/tmux.

No, not really... prior to 2017, that was one of my biggest gripes with Visual Studio - opening multiple editor windows in one instance would quickly chew up enough memory to hit the 32-bit process limit and start page thrashing. Why, I have no idea, but that's what I've seen consistently the past few years in VS 2013 & 2015.

Does it do better now? Can you have 12 windows arranged at once? If so, please tell me how because I spend most of my day in VS. but time and time again I need to open shit in vim, just to get a clear overview.

Far as I know, no. I was settling for having 2-3 editor windows open at once. 2015 was such a pig I avoided it. 2017 can handle it so far. I wouldn't bet on big solutions, 100k+ lines of code, though.

multiple cursors: is nothing compared to the confirm flag in `:%s/foo/bar/gc` + columnar editing in conjunction w/ something like tabularize

intelligent refactoring: is extremely unlikely to happen soon

customizable search directories: is way more flexible in a terminal than anything I've ever seen an IDE guru do. It's not vim though; it's vim + shell.

integrated terminal: you're already in the terminal, using tmux doubles down on the benefit

vim has way more flexible buffer pane layouts + tabs + sessions (IDEs also have sessions, but vim can do multiple sessions for a project). I've never seen an IDE with anything like the jump stack, marks, multiple yank registers, the flexibility of macros, or anything like `:r !$COMMAND` or, for that matter, `:w !$COMMAND`. Basically no IDE has anything which matches vim's `NormalMode`. Type-aware code completion and syntax checking are available depending on the language.

these things aren't free though. you have to invest before they pay out.

> multiple cursors: is nothing compared to the confirm flag in `:%s/foo/bar/gc`

Why is that? I'm a long term vim user, and I prefer multiple cursors to 's///gc' for many operations. Unfortunately the multiple-cursors implementation (via plugins) is not flawless, but it's quite handy for standard usage.

`gc` makes it much more flexible and faster if I know I have a small number of "exceptional" matches, especially early in a buffer, since it offers you `y/n/a/q`. I also find it much faster than clicking potentially dozens of places in the code - versus spewing `$ROUGH_REGEX` and then mashing `yyyynnyyyna` to make edits in 70 locations.

Depends a lot on the work that you have to do I guess. I can spend a lot of time on remote systems, so learning a powerful editor that I can rely on to be there was useful. And all linuxes usually have vi or vim.

But if you don't do that sort of thing, there are plenty of other tools that you can use. Most are probably far easier than learn vi.

But I like using command line tools where possible. Most are available everywhere and they are incredibly powerful once you learn parts of them.

vim, like emacs, is a programmable text editor. Maybe $your_favorite_ide totally outplays vim (or emacs) at some feature, or maybe someone has written a plugin with an equivalent (or better, or somewhat worse but acceptable) version of the feature; you'd have to look, if that's your main criterion. But also note there are many plugins that don't exist in other editors, and of course for vim there are the various macros/functions/key maps that exist in users' memory or .vimrc files - written themselves or gotten somewhere - that solve all sorts of tasks, general and specific.

Anyway, being programmable, you don't have to wish the makers of the editor had built some feature you saw in some other editor; you can just program it (or convince someone in the community to do so). Plus there are all the various text-editing tasks that really benefit from being able to automate something just this one time. If you don't care about the programmability of your editor, you're probably not missing anything.

The JetBrains apps actually have a very good Vim plugin, so you could really get the best of both worlds.

I view Vim as half-editor, half-editing philosophy. The editor is just an app that I like, but I feel that embracing the philosophy does make me more productive and able to accomplish a lot.

Out of curiosity, what kind of macros do you run in vim?

I had a recent refactor which was basically "take a 5-argument function call split over many lines and replace it with a different 1-argument call". I made a macro that found the code block I wanted, edited it to the desired syntax, saved the file, and opened the next file in the list I had opened. I opened vim for this occasion with `vim $(ag -l STRING_IN_REFACTORED_CODE)`, which used search results as a list of files to open with vim.

The most fun vim macro I've made solved a "solve this puzzle before we want to interview you" question. The problem was a URL to request that gave you a response to parse into the next URL, chained about 100 times. The vim macro took the URL in the current line, shelled out to make a network request for the result of `curl` with that line as an argument, and edited the result into the next URL to fetch.

Was the puzzle for Rainforest?

A very common thing that happens to me is that I have to take big blocks of JSON, and map all the keys and values into something else, like HTML or CSV.

Without macros, this traditionally involves a lot of copy-paste-change-stuff repetition.

With a Vim macro, I can happily just do this once, assign it to a register, and repeat as needed, and even store it for later reuse if I want.

There are also, littler things, like jumping between HTML tags, or bulk-indents.

What makes Vim's macros better (in my opinion) than other editors' macros is the fact that they're simply stored in copy-paste registers, meaning that you can have many of them, store them in a different file, copy them off the internet, or compose them together.

Not OP, but here they are really handy for creating and formatting repetitive code blocks. I find them particularly useful when writing testing/throwaway code.

For example, I may be testing a function and wanting to test a few scenarios on many IDs. So I will create a csv list of numbers and then use a macro to format it

Your csv string will be like this (normally much longer), say:

    1001,1002,1003
Then record the macro:

- type "myFunction("

- type ctrl + right arrow (this moves the cursor to after the comma)

- type backspace

- type ");"

- type enter key

Then stop recording and run the macro repeatedly. It will produce something like:

    myFunction(1001);
    myFunction(1002);
    myFunction(1003);
Which doesn't seem like a big time saver over regular copy and pasting, and you could have iterated over the csv string with a for loop. This is a really contrived and basic example, but you find yourself reaching for a macro anytime you have to do the same keystrokes over and over.

It's ancient now, but I put together a video 4 years ago demonstrating a simple macro task: https://www.youtube.com/watch?v=eWfBWg8bVTQ

My favorite use of macros is as a sort of progressive global search-and-replace on steroids. I have @q bound to Space, so I can record a macro with qq, then hit Space to repeat. Since the macro is just a list of commands in a register, I can copy it out and edit it to tweak it; once I'm happy with it over a few lines, I can make it recursive by adding @q to the end.

Compared to regex search and replace this is much more malleable, interactive, and ultimately powerful.

It's worth remembering that . (i.e. dot) repeats the last edit command. vim-repeat ( https://github.com/tpope/vim-repeat ) makes this even better.

Although, there are still several situations where macros are better, especially things that involve several motions and edits. (I tend to use qw because w is more convenient to hit after @).

Also, it's occasionally useful to remember that you can use the double quote character " to paste a macro, edit it and then update it. i.e. ( "ep .... edit ... ^"ed$ )

Also, vim-surround is one of those things that, once you've used it for a while, it's impossible to think of editing text without it: https://github.com/tpope/vim-surround

Any vim user can profit from visiting https://github.com/tpope?q=vim&tab=repositories and trying any one that looks vaguely interesting.

Somewhere between the default and your binding: @@ repeats the last macro. So you can record your macro, go to your search text, @q, n, then for all subsequent matches @@.

A common one is "here's an excel file, can you put these entries in the database". With macros it's easy to do a really complex transform of a line and then repeat it for every line.

Or the reverse, someone wants a report from some database in some specific format for whatever reason. It's often easier to do a simply query and vim the output rather than use the string functions of the DB.

Yes, you are missing out - not because you HAVE to use Vim, but because Vim's approach is quite different; it's worth learning so your mind at least knows about this whole other approach to editing.

I find the Vim plugins in most editors are quite good. A true vimmer probably doesn't like them as much because they are a blend (but then, they may as well just use Vim itself), and for the JetBrains stuff, it blends the awesome of vim with the awesome of JetBrains code manipulations.

Macros can be really powerful. Takes a while to set up, but well worth it.

My last macro took about 12 seconds. Really complicated ones can take a while if you have very conceptually heterogeneous text structures, but it's literally "start macro + do stuff + finish macro" -> use macro.

Some more I forgot about:



Apt-get for Windows, although I would argue it's actually better because there's just one central repo, so you don't have to add some long repo URL before you can install Telegram




The fastest way to test APIs - create GET and POST requests easily and view the results any way you like.

Most of the tools I've mentioned in this and my other comment are listed on Scott Hanselman's "Ultimate Developer and Power Users Tool List for Windows" (https://www.hanselman.com/blog/ScottHanselmans2014UltimateDe...), which is probably the most definitive list out there for this kind of stuff (for Windows). It hasn't been updated since 2014, so there are a few newer alternatives - in particular, Foxit Reader is kind of a mess these days; there are far better alternatives.

Be sure to check out the comments for lots of extra tools in the format "I can't believe you didn't mention {x}, I can't live without it"

Thanks for your shares in this thread: Postman and Hanselman's list, which I had seen a few years earlier but not checked recently.

In what way is Foxit reader a mess, and what are those alternatives?

I like Sumatra PDF, very lightweight.


One thing I like about SumatraPDF is that it saves the position within a file, so if I open a file again, say, after a reboot, I am back at the same position.

It feels bloated in size, features, and UI. It uses a big ribbon UI like Office used to, with too much screen taken up by toolbars, status bars, etc. Most recommendations you get for PDF viewers are for Foxit, Adobe, Sumatra, or CutePDF; I don't like any of them. I found Slim PDF, which is 5 MB (!) installed versus Foxit at 150 MB+. There's also Microsoft Reader, a Windows 10 app with a beautifully minimal UI. A good UI just stays out of the way.

Just tried out Slim PDF. Looks good, thanks. And yes, it is small and light. Like the minimalism too.

Everything - instant search in Windows https://www.voidtools.com/

I simply can't recommend "Everything" enough. Imagine the accuracy of `find` with the speed of `locate`.

My main PC has 8 TB of storage across 4 HDDs; with "Everything" I can find any file immediately.

Too bad Windows is the platform people love to mock. I haven't been able to find a tool as efficient as Everything for Linux or OS X.

Xplorer2 (or another TC clone) allows me to filter my folders under a shortcut. I have a set of custom filters for my tasks that directly show the files I need. Combine it with the other nifty power-user actions in those Explorer replacements and I easily save a couple of hours a week, versus the dreadful wait when you watch colleagues with 10 Explorer windows open, hunting for a file just to have a quick peek. Additional bonus points for the custom user actions to fire up different cmd shells in ConEmu under a shortcut, open the proper editor, open your favorite diff tool without having to select, etc.

Resharper (is that a tool?). No need to explain I guess. You use Visual studio, you get resharper. Can't live without it anymore.

The ability to write bash/powershell/your favorite script language. Everything I do more than twice turns into a script and is added as user action in my xplorer2 under a shortcut.

ShareX gif capture. My company's email attachment max size is too small to send a proper mp4 longer than 1 min. The gif capture has lower quality (sometimes horrible) but is good enough for sharing bugs which require investigation before they can be submitted. Instead of having to type everything out, I capture the gif and send it to our test engineers to turn into a proper PR/ticket. Saves time writing it all out and makes me actually share random issues found while doing other stuff.

P.S. Try IrfanView on Windows; it beats the bulk resize thing in the OP.

Intellij with PHP and Python plugins has saved me ~200 hours in 3 years (very conservatively assuming 15m a day 5 days a week 52 weeks a year).

I suspect in terms of lost time I didn't have to spend debugging and other stuff it's probably 3-4 times that minimum.

Definitely ack (https://beyondgrep.com/). It basically is grep but designed for source code. First off it can be downloaded as a single file (Perl script FTW!), so easy to deploy. Secondly, by default it automatically searches recursively, prints out the filename, and line numbers of matches. It also uses PCRE for its regexes which is really nice.

But by far the most time saving feature I have is the type searching functionality in it! For example, one can type 'ack --java System.out' and it will search ONLY the java files! No more 'find . -name *.java -exec grep -Hni System.out {} \;'. And you can add custom types easily through your .ackrc, it is a very well put together piece of software.

(PS: If you're trying it, definitely try out 'ack --bar')

If you mention ack, you gotta mention silver search (ag):


If you mention the silver searcher (ag), you gotta mention the platinum searcher (pt) :)


Since it's pure Go, there's your static binary which makes it easily deployable again.

If you mention ack, ag, and pt, you also have to mention ripgrep (rg) https://github.com/BurntSushi/ripgrep which has the usability features of the newer grep-alikes but is as fast or faster than gnu grep.

What is the difference between ack and ag? I use ag a lot, and it seems, on web, whenever ag is mentioned, ack is also somewhere in vicinity (or vice-versa).

ag is faster; ack is simpler and more OG.

You should try ripgrep (https://github.com/BurntSushi/ripgrep), which has the same functionality but is up to two orders of magnitude faster.

I used to use grep and ag quite often until I found ripgrep. It's really an incredible piece of software.

> No more 'find . -name *.java -exec grep -Hni System.out {} \;'

While ack is great, I just want to point at you could easily do 'grep -rHni System.out --include \*.java'. Granted it's still more words, but the '--include' directive (which you can specify multiple of) is really great for quickly modifying a grep search to restrict to certain filetypes. No need for find/exec :)

(PS: and also 'ack --cathy')!
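For the curious, here's a quick self-contained sketch of that --include trick (the file names and search pattern are made up for the demo):

```shell
# Make a scratch directory with one Java file and one text file,
# then restrict a recursive grep to the Java files only.
tmp=$(mktemp -d)
printf 'System.out.println("hi");\n' > "$tmp/Main.java"
printf 'System.out mentioned here too\n' > "$tmp/notes.txt"
grep -rHn 'System.out' --include '*.java' "$tmp"   # matches only Main.java
rm -rf "$tmp"
```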

This freeware (for personal use) saved me dozens of hours renaming files: http://www.bulkrenameutility.co.uk/Screenshots.php

The OS X* equivalent is A Better Finder Rename: http://www.publicspace.net/ABetterFinderRename/

When installed, you can use the app or call it as a service.

Not sure if it's as full-featured as Bulk Rename Utility, but it's close. [regex replace is hidden under Advanced & Special, which will tell you a bit about their target market.]

* Available for OS X 10.[12]-11, and macOS 10.12

I've used this before too, long ago; and it saved me a boat load of time!

* 'set -o vi' in bash lets me search my command history with the same keys as moving around vi

* git + sql scripts

* simplenote (notes across devices) Semi-automation of pieces of my job captured in these notes.

* zapier if I'm trying to connect two web apis.

>'set -o vi' in bash lets me search my command history with the same keys as moving around vi


I love 'set -o vi' and have been using it for ages, since ksh in fact. It makes editing commands and then re-issuing them so fast in Unix, if you're even just okay at vi - you don't even need to know vim. I used to immediately put it into the .kshrc on any new Unix system I started working on (along with some other commonly used productivity settings, of course).

A fact that some might not know about 'set -o vi' is that once it is set (a one-time task per login shell, or put it in your login shell's profile file, then it is one-time, period), and you are now editing a command at the command prompt (using vi features), you can just press v to open up vi with a temp file containing that command, edit it using the full-screen view and power of vi, then save the file and it runs automatically.

This is particularly useful for multi-line shell commands like a for or while loop, or a long command pipeline.

And while in that vi session, if you decide you do not want to run the command right now, you can always save it to a different, persistent file name (with :w filename), and exit the temp file without saving it.

Then the command will not run, but you've saved it in a permanent file, and can do anything with it, now or later. This is useful if you realize in the middle of editing it that you want to look up some command syntax (to add to the file), or polish the code some, etc., but maybe not right now, because you want to finish your current task first (which the original command sequence was a part of). You can just edit that permanent file at your leisure later, polish and test it, then use it regularly as needed.

Edited for formatting and grammar.

> you can just press v to open up vi with a temp file containing that command, edit it using the full-screen view and power of vi, then save the file and it runs automatically.

FWIW you can do this with ctrl-x ctrl-e in normal (emacs-like mode) and it will open up your $EDITOR. Example:

    > EDITOR=nano # or EDITOR=vi, EDITOR=emacs, etc
    > echo [ctrl-x ctrl-e]
    * edit command by adding foo and save*
    > echo foo

Thanks, good to know. I don't use emacs, but I'm thinking of trying it out after these years of not using it.

In the initial years I did not have access to it on the Unix machines I worked on (I only had vi), and later I read that it needed a lot more keystrokes than vi, plus heavy use of the Ctrl key, which is in an awkward position (though I know it can be swapped with, say, Caps Lock through software). That may be why I did not get into using it, though I have read that it is extremely powerful because it is programmable in Elisp. (I've read Steven Levy's book and some other stuff about GNU, rms, etc., including the book Free as in Freedom about him and GNU. Good read.)

I love programmable tools and prefer them to the non-programmable kind, but have made an exception of sorts in the case of editors - I know vim too is programmable, but having looked briefly at its syntax, I don't like it much. So I guess I may get into emacs after all, but somewhat slowly.

awesome tip! thanks!

In bash:

$_ in a command gets substituted by the last argument of the last command you typed - happens to be useful surprisingly often. Simple example: copy a file to some other directory, then cd or pushd to that directory:

$ cp some_file /long/path/to/directory

$ cd $_

But there are many other cases where I regularly find it useful.

_ also evaluates to result of the last expression in the Python interactive shell, and probably in IPython (command-line version) as well - need to confirm it about IPython.

Edited for formatting of commands above.

Another useful bash thing: !! (which substitutes the last command you entered)

cat /really/long/path/to/something/only/readable/by/root

cat: Permission denied (goddamnit)

sudo !!

Oh yeah, I sudo bang-bang all the time (it's in other shells, too, zsh for example). I am also fond of !$, which is the object of your last command - lots of times I will ls a file to see the timestamp or confirm it's there; and then want to open it / edit it / rm it, which just looks like

$ ls /long/path/to/file.ext

$ less !$

What if you need to run a third version of the command, or a version that involves deletions, e.g. "less /really/long/path/to/..."?

  bash-3.2$ less /var/log/authd.log.0.gz 
  "/var/log/authd.log.0.gz" may be a binary file.  See it anyway? 
I type: control-P control-A meta-D zcat control-E | SPACE less

  bash-3.2$ zcat /var/log/authd.log.0.gz | less
On the OS X terminal, you have to configure it to treat option as meta before the meta-D above will work. I forget whether common Linux terminals require such configuration. (Edit: zless would work just as well here, and on Linux "less" appears to basically have the functionality of zless.)

With set -o vi:

    <ESC> k cw zcat <ESC> A <SPACE> | <SPACE> less <CR>
Very natural for any Vi user: go up a "line", change the first word, then append.

Yes, that's a good one :)

There are zillions more such things that can be done in Unix, of course, if one puts one's mind to it and experiments, even without spending a lot of time. Another simple one I use is to define some environment variables to the values of long directory names I need to go to often as part of some project. Then I can just do:

cd $short_name_for_that_dir

A shell alias can also be used instead.
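A minimal sketch of both variants (the path and names here are invented for illustration):

```shell
# Save a long project path in an environment variable...
export backend_src=/tmp/example/very/long/path/to/backend/src
mkdir -p "$backend_src"
cd "$backend_src"        # i.e. cd $short_name_for_that_dir

# ...or define an alias for the same jump (e.g. in ~/.bashrc):
alias cdback='cd /tmp/example/very/long/path/to/backend/src'
```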

Another one I love is that both bash & zsh can edit the current command in $EDITOR. In bash, it's ^X^E by default, and in zsh, I think you have to bind it. (it's edit-command-line, which I strangely can't find in man zshzle.)

Not the original poster, but the _ trick works in ipython as well.

Cool, thanks for confirming.

Instead of $_ you may find ESC+. easier. I have.

Cool, didn't know of that one.

I would love to have a self-hosted version of Zapier. I am not really comfortable with a third party having that much access to all my web services. However, all their supported services together with a few lines of Python is really powerful.

There are a number of competitors, some self-hosted.


Huginn is basic but very configurable for this

For me it is i3wm - an amazing window manager. After using it for a few days you will never want to go back to using standard mac/linux wm, it is incredibly fast and convenient.

Another one is, of course, Emacs. By far the most useful and brilliant tool in my toolbox.

After wasting my life in IDEs like Borland, VS, Eclipse, and NetBeans (which I still consider OK), I broke my resistance and moved to emacs, i3wm, and command-line tools. I have become extremely productive in the last 5 years.

When I see productivity articles like this coming along, I can't help but smile and think (but you could do this or that easily using wdired or macros or...).

Over the years I've found I've saved the most time less from apps I've found but more the ability to write off-the-cuff single-use Ruby scripts. Whether scraping, renaming, fetching, etc, it can be a huge effort saver.

Same, except


(One of the guys I work with does this with Java - which always seems an odd choice to me. The best sysadmin I've ever worked with did everything in bash, which feels odd to me for opposite reasons (ever seen anyone doing SQL database connections in shell???). But the underlying message holds: "learn and use a programming language, it pretty much doesn't matter which one, whatever you're most comfortable with - and then use it when the simple tools don't work for you...")

I was a Perl developer for 8 years before Ruby and I think it's where I got the habit from - it's a very 'Perl thing' to do! :-)

Java does seem an odd choice. I don't disrespect Java, but for very quickly loading in data, making changes, etc., it's going to be far from a few lines, I suspect.

I wonder if people still voluntarily move from Perl to Ruby, other than for financial reasons. I had to dive into Python and Ruby for my dissertation. I didn't like Python, but I completely understood how it would be possible to. Ruby just felt like a less predictable Perl with more immature tooling.

I haven't used Ruby in years, but I used it a lot for toy projects at home, and I recently looked at a few of these, and I was kind of impressed - even after years, and almost without any comments, the code was still very readable and rather beautiful. Ruby can be very elegant and pleasant to use; I have never used it for web applications, though.

(At work, I use Perl quite a lot, though. I recently have started trying to use Go for problems I normally use Perl for. But the jury is still out on how well that works.)

re: entry #1, if you are at all proficient with a unix/linux/bsd command line, learn to use the command line tools for the ImageMagick library. There are a great many things you can accomplish with the "mogrify" CLI tool to batch manipulate images, and integrate it into workflow/shell scripts.
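A batch resize with mogrify might look like the sketch below (it assumes ImageMagick is installed and skips quietly otherwise; the file names are hypothetical):

```shell
# Skip gracefully on machines without ImageMagick.
command -v mogrify >/dev/null 2>&1 && command -v convert >/dev/null 2>&1 \
  || { echo "ImageMagick not installed"; exit 0; }

tmp=$(mktemp -d) && cd "$tmp"
convert -size 100x100 xc:white photo.jpg   # create a sample image to work on
mkdir resized
# Fit every JPEG within 800x800 pixels, writing copies into ./resized
# (without -path, mogrify overwrites the originals in place).
mogrify -path resized -resize 800x800 *.jpg
ls resized
```

The `-path` flag is what makes mogrify safe to experiment with, since its default behavior is destructive.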

One of the problems with ImageMagick especially, but complex CLI interfaces in general, is that the discoverability of features is limited by the separation between parameter-specification and documentation. No matter how streamlined or thorough (and there's often a tradeoff between the two!) your man page is, the documentation on a flag won't be at the same place that you type the flag, and unless you can speed-read a list of flags, you aren't hand-held towards the most common options. And if it's something you use in infrequent time-crunches (as with OP), then you won't commit the flags to muscle memory.

Of course, the tradeoff is that you can't easily script this way. If the software gives you complicated pipelines and allows you to save them, though, sometimes you don't need to. I'd love to see more GUIs that, instead of (or in addition to) doing processing internally with calls to the low-level library, allowed you to copy out a command line version of the command which you could further customize for scripting. But it's rare to see that kind of feature.

Just fyi


And at least last time I used it you could record an action and convert to script.

Some of the top-rated commands on commandlinefu.com have saved me many hours. The key sequence to kill a hung ssh connection (Enter, then ~.) has been a life saver.

On the Mac, I think the combination of capability and customizability makes Keyboard Maestro well worth buying for any advanced user. Or, as the software overview describes it, "The only limit to Keyboard Maestro is your imagination!"

And while I have no idea how the information is calculated, the About dialog says the software has saved me 6 months (~4,000 hours). It may well be more than that since I've used KM for about 10 years now and that information is a recent addition.

Keyboard Maestro is indeed the best tool in its class, but I think the motto should actually be:

"The only limit to Keyboard Maestro is your ability to tolerate the disastrous user interface"

I've worked with the developer in the past on some aspects of the app, and he's responsive and open-minded. I wouldn't say the interface is disastrous—more like it's not well-suited to helping the user put its capabilities to use. Whether my assessment or yours is more accurate, the bottom line is that the interface has ample room for improvement. My problem, though, is that I've never conceived of a solution that's so much better it's worth sharing with the developer because 1) users will benefit, and 2) it's an interface that can accommodate the app's evolving capabilities through at least a few major revisions.

Apart from Emacs and Unix in general, lately I've come to depend on some simple readline shortcuts in bash:

$ cat ~/.inputrc

    "\e\C-k": shell-kill-word
    "\e\C-a": "awk 'BEGIN{OFS=FS=\"\\t\"; } { }'\C-b\C-b\C-b"
    "\e\C-w": "\C-w\C-y > /tmp/temporary-inputrc && mv -f /tmp/temporary-inputrc \C-y"
    "\e\C-f": "find . -type f -print0|while read -rd '' f; do ; done\e-b\C-b\C-b\C-b\C-b\C-b\C-b"
    "\e\C-i": "\C-awhile true; do ( \C-e ); inotifywait -q -e modify -e close_write ; done\e51\C-b"
    "\e[A": history-search-backward
    "\e[B": history-search-forward
    "\ep": history-search-backward
    "\en": history-search-forward

So if I've typed "make -j && grep foo|sed 's,b,x,g'|./run", and then hit ctrl+alt+i, that'll turn into

while true; do ( make -j && grep foo|sed 's,b,x,g'|./run ); inotifywait -q -e modify -e close_write ; done

meaning whenever I save a file in that directory, the command runs again.

Looks like some of your text got a little mutilated by the pseudo-markdown parser; perhaps copying that into a gist or paste would be better? I'm really curious what some of what you've copied does.

Sorry, that was quite unreadable! My full inputrc is at https://gist.github.com/unhammer/130df5f3ebf61724d36f60c4bb1...

xargs is a remarkable time saver. jq for JSON parsing and selecting at the CLI. ^X^E at the shell to edit command in $EDITOR. for loops at the shell. until at the shell. atd for "run this in two hours".
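For anyone who hasn't tried jq, here's a tiny illustration (assuming jq is installed; the JSON is made up):

```shell
# Pull a single field out of a JSON document at the command line.
# -r prints the raw string instead of a quoted JSON value.
echo '{"name": "ripgrep", "stars": 12345}' | jq -r .name   # prints: ripgrep
```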

IntelliJ for everything. Extract methods and interfaces auto-modifying to support close duplicates. Signature changes. That sort of thing.

Rsync. Zsync allowed me to even get Linux back in the day (thank you Ubuntu).

When working with Excel, there is a (paid) add-on called Kutools that has saved me a lot of time. It has a bunch of preset operations like removing duplicates, merging multiple sheets, bulk-changing rows, etc.

Another text editor worth mentioning is EmEditor, which has tons of macros and handles large files much better than Sublime.

    > What are your 100 hour time savers?


  function ff() { find ${2:-.} -name "*$1*" ; }
  function gg() { grep -n $* ; }
  function ggr() { grep -rn --exclude-dir=.git $1 ${2:-.} ; }
  function ggi() { grep -in $* ; }
  function ggri() { grep -rin --exclude-dir=.git $* ${2:-.} ; }
Oh, and that one:

  # display a man page as a pdf window
  function pman() {
    local pdf="/tmp/pman-$RANDOM.pdf"
    ( man -t "$@" | ps2pdf - >"$pdf" && mupdf -r 96 "$pdf" 2>/dev/null ) &
  }

Speaking of man pages, this script is tiny but useful (IMO):

m, a Unix shell utility to save cleaned-up man pages as text:


Beyond Compare


Especially with integrated source control and remote sessions to manage source on multiple server types and platforms. Very customizable rules for compares and folder structure / filename case etc.

I used to use Araxis Merge, but Beyond Compare, with a tiny bit of customization, is in my opinion a far superior tool.

My current faves, far from exhaustive:

- tac: for very large log files

- emacs: remote file editing, regex replace across matching files, macros, rectangle cut/paste, ...

- xargs -P <n>: use all the cores! For fanning work out across a pool of <n> processes; web scraping, processing lots of files at once

- docker: fast, precise and reusable image building
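As a sketch of the xargs -P idea (the file names are invented, and gzip stands in for whatever the real per-file work is):

```shell
# Compress a batch of files using up to 4 parallel gzip processes.
# -0 pairs with find's -print0 to handle odd file names safely,
# and -n 1 hands each process one file at a time.
tmp=$(mktemp -d)
touch "$tmp"/log1.txt "$tmp"/log2.txt "$tmp"/log3.txt
find "$tmp" -name '*.txt' -print0 | xargs -0 -P 4 -n 1 gzip
ls "$tmp"    # log1.txt.gz  log2.txt.gz  log3.txt.gz
rm -rf "$tmp"
```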

For me, it'd have to be the fish shell, fishmarks (a directory bookmarks plugin I wrote), and vim/neovim.

After spending the last two days researching, followed by an hour of programming, this one can hopefully save you some time: automate web browser tasks with iMacros [1]. I use it to feed CSV data to a database.

I tested multiple other options like Custom Style Script [2] and others, but while they were OK during initial tests, for some reason nothing worked on my TYPO3, JS-based backend website.

[1] https://addons.mozilla.org/en-US/firefox/addon/imacros-for-f... [2] https://addons.mozilla.org/nn-NO/firefox/addon/custom-style-...

Are there other sites with these kind of automating tips and tricks? I would love to automate more in my life.

Lifehacker runs such articles somewhat regularly, I think. I don't read that site often, mostly when I come across such a tool published on it, that I get to hear about on some other site. But I've found some of the posts there to be good.

Interestingly, IIRC, Gina Trapani, who I think was/is an editor or writer there, had herself written a utility for to-do lists called todo.txt or some such name. It was supposed to be lightweight and only need and use text files, so could be used almost anywhere.

I don't know; it took me a while to find all of these. If you have Android, I'd recommend Tasker, or Automator for Mac. Both can automate pretty simple tasks.

Btw, in the comment section of the post, people are posting really good tools to automate things. :)

Windows here.

* SharpKeys - swap Caps Lock and Escape

* KeePass - password manager

* AutoHotkey - a lot of stuff, e.g. Escape + scroll turns volume up or down, Alt + x closes windows

* Wox - launcher with plugins

* Everything - fast file search by name

* Tablacus - file explorer with plugins

Greenshot - http://getgreenshot.org/

This is THE best screenshot tool on Windows, I've installed it on every machine I use. It also has an editor where you add text, highlight, circle things, etc.

Just tried it. It has fired my old capture tool, thanks!

You can store screenshots in a custom folder like this:

    # Store screenshots in ~/temp/screenshots
    mkdir -p ~/temp/screenshots
    defaults write com.apple.screencapture location ~/temp/screenshots

AutoHotkey ... task-specific ad hoc scripting

* mouse moves

* keyboard input

* raising lowering windows

* one keypress -> multiple effects

create an edit -> launch -> debug tight loop ? no problem

dismiss annoying Lotus Notes dialog warning box -> no problem

enter 5000 multi tabbed entries from a CSV into a custom gui -> no problem !!

If you like autohotkey or autoit, I highly suggest trying out sikuli sometime. I have been able to do some pretty cool things with it due to the image recognition engine.

FastStone Photo Resizer is an excellent free batch image resizer/watermarker for Windows: http://faststone.org/FSResizerDetail.htm . No affiliation.

Phatch on Linux can do a lot (all?) of what PhotoBulk does.

(Obv. ignoring more command line oriented tools.)

The macro recorder in Notepad++ has saved me tons of time processing or parsing files.



Saved me 100+ hours _this week_ :-). Thanks, Tony Finch &co!

Apart from many Unix utilities (grep, awk, vim, etc.), tmux has definitely changed the way I work. The simple idea of session -> window -> pane takes away all the friction of switching between tabs and windows.

Writing bash scripts and backing them up to github saves me tons of time.

You should check this thing out, then: https://github.com/erichs/composure

I make a habit of writing any sequence of two or more commands as a shell file, no matter how simple. This way if I need to reference it later I don't need to grep through my history on every box.

In the opposite vein, it can be helpful to stick something like ` #useful` or ` # description ` on the end of a command you expect to need again, so you can easily grab from history with C-r

Excellent idea, going to use it :)

For benefit of others - this approach would work with the 'set -o vi' option too (discussed by me and others elsewhere in this thread) - just that you would have to type a / (slash) to search instead of C-r (Ctrl-r), because the mode is vi, not emacs. And since you start (obviously) at the bottom of the history (i.e. latest command first), the usual sense of the vi / command (which means search forward or down in the edited file) is reversed here to mean search backward in the history of commands. And ? is the opposite of / . N and n work, too, to repeat the previous search command in the opposite / same direction.

Yep. Super useful. I figured it out only recently! I ssh into many servers with IP addresses and weird hostnames. Now I add #server-description at the end of the command and I can get back to it instantly.

I prefer to use a short memorable name as an alias in `~/.ssh/config`, so with keys set up properly and a suitable alias in .zshrc I can connect with 4-6 keystrokes, and take full advantage of zsh's auto-completion across hosts (way better than in bash).
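Such an entry might look like this (the alias, IP, user, and key file are all made up):

```
# ~/.ssh/config
Host web
    HostName 203.0.113.17
    User deploy
    IdentityFile ~/.ssh/id_ed25519
```

After that, `ssh web` is all it takes, and tab-completion picks up the alias.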

I still have no easy solution to the "same box" changing IPs because you booted up a new test cluster. I guess that's the point of DNS.

You might like this, then: https://github.com/erichs/composure

Similarly, adding a setup.sh for clean installs doesn't hurt.

Napkin. I use it every day, many times a day, to make screenshots with quick annotations.


Does anyone know if something like Hazel exists for Linux?

I know I could basically write scripts and start them with systemd timers/cron, but I'm not sure it'd be worth the effort doing it all myself.

GNU Parallel?

BBEdit (barebones.com) on macOS.

- Perl regexp's

- Worksheets for things you'd want to remember

- Process Duplicates Lines / Process Lines Containing

- Nice set of Markup tools

- Diff

- Supports all kinds of versioning systems

Araxis Merge (x-platform) for text, image, folder diffs.

You can bulk resize in Photoshop, just FYI.

I know; you could also create a macro to watermark every picture, or create a special export setting. But I wanted to give tips that don't require a lot of technical expertise, so that they're available to everyone. Also, PhotoBulk ($10 once) vs. Photoshop ($10 a month) is cheaper if you don't have Photoshop.

Or with ImageMagick!

FME. Simply no better tool for slicing and dicing all manner of spatial data.

How is PhotoBulk better than IrfanView?

Also, for renaming, I find the built-in TCMD multi-rename tool quite powerful.

Whoa, when I click on this link, it automatically closes the tab it resides in!

At first I thought this was an elaborate meta joke, since not reading articles would save me thousands of hours, but I think it is a bug. I am using the latest Google Chrome with a very restrictive uMatrix that blocks pretty much all cross-site requests that are not images, plus AdBlock.

Weird. I'm using posthaven.com; it's pretty useful as a blogging tool. Check if you can open it :)

I tested some more. It closes/crashes the tab when I do not allow requests from facebook.com in frames in uMatrix. I didn't investigate further though, I'm tired from a long and busy day.

Posthaven.com works without problems.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact