Ask HN: Which tools have made you a much better programmer?
385 points by karamazov 3 months ago | 514 comments
Getting better at coding is usually a long, slow process of study and practice. However, sometimes I run into something that's easy to understand and, once I'm using it, feels like I've leveled up.

A few personal examples are:

* version control - specifically, reading up on git and understanding the more complex commands

* debuggers

* flame graphs for performance debugging

* good code search

What have you added to your workflow that's made you much more productive?




- GNU/Linux + i3wm; complete control over my programming environment.

- bash + GNU coreutils; seriously, take the time to be able to write helpful bash scripts which can run basically anywhere.

- git; use it even when you're not pushing to a remote. Add helpful aliases for everyday commands. Build a good mental model of commits and branches to help you through tough merges. ( my ~/.gitconfig: https://gist.github.com/jeaye/950300ff8120950814879a46b796b3... )

- Regex; combined with bash and your GNU tools, a lot can be done.

- Vim; modal editing and vi-like navigation can blow open your mind. Explore the existing plugins to accomplish everything you want to be able to do. It's all there. ( my ~/.vimrc: https://github.com/jeaye/vimrc )

- Functional programming; if you're new to this, start with Clojure, not one of the less-practical languages. This has made such a huge impact on the way I think about code, write code, and design code that it's possibly the biggest one here for me.


An excellent list. Regarding functional programming, I recommend starting with a gentle approach that doesn't require picking up a new language:

1. Stop creating counters/loops and become facile with map, reduce, and the like. This will shift your thinking away from blocks and toward functions.

2. Take the time to really understand what side effects are, and start avoiding them everywhere they are not necessary. Keep scopes as local as is practical.

3. When you start toying with functional programming per se, make sure you really have your head around recursion. That's where much of the magic concision comes from.
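Steps 1 and 2 above can be sketched in any mainstream language; here is a minimal Python illustration (function names are made up for the example):

```python
from functools import reduce

# Imperative style: a counter loop mutating an accumulator.
def total_len_imperative(words):
    total = 0
    for w in words:
        total += len(w)
    return total

# Functional style: the same result as a map + reduce pipeline,
# with no loop variable and no mutation (step 1), and no side
# effects outside the function (step 2).
def total_len_functional(words):
    return reduce(lambda acc, n: acc + n, map(len, words), 0)
```

Both return the same value, e.g. `total_len_functional(["map", "reduce", "filter"])` gives 15.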


Agreed on the gentle approach. I've started using libraries like Ramda for JS, which is specifically designed to ease people into functional patterns.

I then moved to fp-ts, which makes heavier use of Haskell-like patterns and monads.

The hard part isn't the syntax of whatever language, but understanding the new patterns and way of thinking, which you can do with a simulation layer like fp-ts (for TypeScript/JavaScript).

The functional patterns and emphasis on types make your code more robust and correct. It emphasizes correctness, which is ultimately what your job is as a programmer. Optimization comes after.


I too have started down the FP route with Ramda and ramda-adjunct.

Though other people at my work dislike it because they don't understand what's going on.


There could be other reasons for disliking it besides not understanding what’s going on. Maybe other people are working on tight deadlines and don’t have the time or mental energy to spend trying to understand an entirely unfamiliar programming paradigm.

I would want to program in functional too, but I would seek out projects or teams that already use functional. I’d never introduce a functional language to an already established team or project, unless of course, I was the CTO and there were clear benefits.


If you practice domain driven design, you can always start out isolated projects or even just new modules if your code is sufficiently modularized. I started out by making a new library/module with Ramda, and then it's consumed via a function call that anyone programming in any style can use.

Agreed that functional patterns can be hard to understand if you don't know them, but if you do know them, it is much easier to understand and reason about the code. It's a long-term investment, and one that frankly I believe will be inevitable as more and more people start doing programming work.


Had the same result, and had to settle for the middle ground of lodash or underscore (I don't recall which now).

There's a lot of resistance to adopting anything with the name functional, and that resistance is often seen in a /refusal/ to try to understand instead of a mere lack of understanding: people put up walls straightaway. I expect many need motivating examples to guide them to it.

Languages like Kotlin are pointing the way towards that middle ground far more effectively than, say, Scala did.


> I recommend starting with a gentle approach that doesn't require picking up a new language

Disagree. This is likely to 'dilute' the lessons of functional programming, as it were. If you learn to program in idiomatic Clojure/OCaml/Haskell/Scheme, you can be relatively sure you really have picked up the principles of functional programming.

If you merely attempt to apply new principles in an environment you're already comfortable in, things aren't going to 'click' in the same way.

Besides that, plenty of languages simply lack the facilities needed to effectively use the concepts of FP, as vmchale says.


I can confirm this. Although I knew the principles of FP, concepts didn't click until I started using a functional language.

In non-FP languages, I didn't originally appreciate the benefits of the pattern. It was more work to do things functionally, so I dismissed some patterns that were actually useful.

I'm a bit biased, but I would recommend Elixir as an accessible FP language. It has an approachable syntax and modern tooling.

You may be frustrated with the constraints of immutability for a few weeks, but the benefits become apparent once you're used to it.

Now when I work in non-FP languages like JavaScript, I will apply FP principles when it makes sense.


Some people say learning Latin makes you a better writer, smarter, etc. even if you're unlikely to directly use it. Dubious claims, but it feels like FP can be like that.


The thing about learning Latin (not that I am great at it), at least for a native English speaker, and the epiphany English speakers have, is the realization that a language can have structure and can be discoverable: if I know a root word, then I can be almost sure I know the meaning of con-, re-, dom-, etc. There is just no equivalent in English other than the Latin-origin words we borrowed, because we assimilate any words we like and make up new ones as we see fit. An example would be beaucoup, a Vietnamese word, yet most US English speakers would know it means big. Without the historical reference of a movie you would have no rhyme or reason why an Asian-origin word made it into English. There is literally no rhyme or reason to most of the spoken language, and I think that is the epiphany: some people actually thought out a logical way to create a language, and via that logic it is discoverable.


Isn't beaucoup French?



Yes it is - France colonised part of Vietnam.


Yes, and as Army slang it made it into American English through Vietnam.


Very cool. Did not know this. Will have to tell my Vietnamese wife. We were trying to figure out if English had any Vietnamese words. I think this counts.


I'm pretty sure that in Louisiana it was already slang, through the French spoken here.


Hundreds of years of French culture through Louisiana.

Other unique French US cultures:

Haitian Creole

New England French, who were Canadian migrants.

Missouri French

Muskrat French from Michigan

North Dakota Métis French

It could have come to some people 40 years ago, but it was already here for many.

Still it's a French word. It is like going to Germany and learning an English word but calling it German.


Sure. When learning Latin, well, you learn to read and write Latin. You absorb the language's principles by learning the language, not by trying to highlight them in a language you already know. That can only give a much shallower appreciation.


Not quite true - many complexities of grammar are shared between the languages and it is often useful to structure the learning of Latin around the English patterns of grammar.


I imagine that diligently learning a foreign language (dead or otherwise) will make you smarter, especially if the free time spent on it would otherwise be spent on less academic pursuits.

(edit: spelling)


For me, I get the most benefit from using FP features. I can write one-liners that are easy to read and replace tens to hundreds of lines of my code.

For a simple example:

  val newList = list.map(functionCall(_))

instead of:

  val1 = functionCall(1)
  ...
  valN = functionCall(N)


A side note: "dubious" comes from Latin, sharing the root of "duo", which means two - in this case referring to the possible indecision between two things or ideas.


Even in languages where the concepts can be encoded, it can be hard to determine which aspects of a given library are the encoding and which parts are the fundamental ideas if you haven't seen the ideas used in a well-suited language. For instance, I didn't really understand the use of functools.reduce[0] or itertools.starmap[1] in Python until I was familiar with zipWith[2] and foldl[3] in Haskell.

The ideas themselves are not particularly complicated, but I hadn't previously worked with abstractions where the default was to operate on whole data structures rather than on individual elements, so I didn't see how you would set up your program to make those functions useful. In addition, for abstract higher-order functions, type signatures help a lot for understanding how the function operates. I found `functools.reduce(function, iterable, initializer)` significantly more opaque than `foldl :: (b -> a -> b) -> b -> [a] -> b` because the type signature makes it clear what sort of functions are suitable for use as the first argument.

It's now easy for me to use the same abstractions in any language that provides them, because I only have to learn the particular encoding of this very general idea. While I couldn't figure out why functools.reduce was useful or desirable, I couldn't figure out many parts of C++'s standard template library at all. But if you already know the core concepts and the general way that C++ uses iterators, the fact that functools.reduce, Data.Foldable.foldl, and std::accumulate[4] are all basically doing the same thing for the same reasons is a lot more readily apparent.

[0] https://docs.python.org/3/library/functools.html#functools.r...

[1] https://docs.python.org/3/library/itertools.html#itertools.s...

[2] https://hackage.haskell.org/package/base-4.14.0.0/docs/Data-...

[3] https://hackage.haskell.org/package/base-4.14.0.0/docs/Data-...

[4] https://en.cppreference.com/w/cpp/algorithm/accumulate
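To make the correspondence concrete, here is a small Python sketch of the two pairs discussed above (the input values are made up for illustration):

```python
from functools import reduce
from itertools import starmap

# foldl :: (b -> a -> b) -> b -> [a] -> b
# functools.reduce(function, iterable, initializer) has the same shape:
# `function` takes (accumulator, element) and returns the new accumulator.
lengths_sum = reduce(lambda acc, s: acc + len(s), ["ab", "cde"], 0)  # 5

# zipWith f xs ys in Haskell is starmap(f, zip(xs, ys)) in Python.
pairwise_sums = list(starmap(lambda x, y: x + y, zip([1, 2, 3], [10, 20, 30])))
# [11, 22, 33]
```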


> it can be hard to determine what aspects of a given library are the encoding and which parts are the fundamental ideas if you haven't seen the ideas used in well-suited language

That's a good point. Using a proper functional programming language doesn't just enable FP ideas (you can't fake a feature like implicit capture of variables), it may also clarify them by reducing baggage.

> I found `functools.reduce(function, iterable, initializer)` significantly more opaque than `foldl :: (b -> a -> b) -> b -> [a] -> b` because the type signature makes it clear what sort of functions are suitable for use as the first argument.

I suspect you're just a better Haskell programmer than me (I've only ever dabbled), but I find the big-mess-of-arrows syntax to be pretty confusing compared to a simple tuple of descriptively named identifiers.

Perhaps related to this: I don't see the practical appeal of currying. Even C++ supports the 'bind' pattern just fine - http://www.cplusplus.com/reference/functional/bind/#example
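For what it's worth, Python sits in the same middle ground: `functools.partial` gives you the 'bind' pattern without currying. A quick sketch (the `power` function here is hypothetical):

```python
from functools import partial

def power(base, exponent):
    return base ** exponent

# The 'bind' pattern: fix one argument, get back a new function,
# without the language needing curried functions.
square = partial(power, exponent=2)
cube = partial(power, exponent=3)

print(square(4))  # 16
print(cube(2))    # 8
```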


A gentle way to get into the "functional" mindset is to write a small script and then use it to process some collection with xargs.

xargs is analogous to map() in this situation, and the script needs to have limited side effects to work well with concurrency (xargs -P4, for example).
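The same pattern can be sketched in Python, where a worker pool's `map` plays the role of `xargs -P4` (the function and worker count here are illustrative):

```python
from concurrent.futures import ProcessPoolExecutor

def transform(n):
    # A pure function: no shared state, so running it concurrently
    # is safe, just like a side-effect-free script under xargs -P4.
    return n * n

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(transform, range(5))))  # [0, 1, 4, 9, 16]
```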


Functional programming has been a game changer for me as well and has enabled me to write larger and more complex programs that are easy to maintain and reason about. I highly recommend cytoolz for Python.


> An excellent list. Regarding functional programming, I recommend starting with a gentle approach that doesn't require picking up a new language:

But at least make sure that your language supports closures.
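For anyone unsure what to check for: a closure is a function that captures variables from its enclosing scope. A quick Python illustration:

```python
def make_adder(n):
    # The inner function "closes over" n, capturing it from the
    # enclosing scope even after make_adder has returned.
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print(add5(10))  # 15
```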


This! What is funny is that I started doing this before I knew anything at all about functional programming; I just started to avoid stuff that I had painful experiences with.

Later I read a couple of chapters of SICP, and then I really changed and my programming hasn't been the same since. The language I use at work is JavaScript, and while SICP isn't for JavaScript, nothing else has changed my JavaScript for the better to that degree.


> while SICP isn't for JavaScript

There's a port. https://sicp.comp.nus.edu.sg/


> 1. Stop creating counters/loops and become facile with map, reduce, and the like. This will shift your thinking away from blocks and toward functions.

I am not very comfortable with this. How can I learn to do this in traditionally non-FP languages like Java? (Am CS undergrad student)


Caveat: I haven't touched Java in years, and that was not even a current version of Java at the time (well, it was old code made to run on the then-current JVM, but not utilizing any features introduced after 2006 or so). I'm assuming these are good resources, but I'm not sure.

https://developer.ibm.com/technologies/java/series/java-8-id...

List of articles relating to idiomatic Java 8 code. Some of these touch on using lambdas and functional idioms.

https://developer.ibm.com/articles/j-java8idioms3/

This one shows a few of the functional-styled methods that can be used (forEach, takeWhile, iterate, etc.).

https://developer.ibm.com/articles/j-java8idioms2/

Shows the collection pipeline pattern.

I have experience with the same things in C# and other languages; the way they're using them in these articles is what I'd expect from a comparable API.


I can't speak authoritatively about Java, but it looks like map-reduce is available in Java 8 by converting a collection to a stream [0]. Considering the definitions of map and reduce can help one see how they can replace loops/counters:

MAP: Take a collection, say a list/array or a dictionary/hash, and perform some function on each member of the collection, returning a new collection whose members are the return values for each original member. It's a loop, but no loop!

REDUCE: Walk the collection like map does, but carry along an accumulator; your function combines each member with the current accumulator to produce the next one, and the final accumulator is the result. Summing is a basic example.

I'm not specifically recommending preferring this in Java as a step towards functional programming. It's in, uh, more terse languages like Python and Ruby where the payoff is obvious [1][2]. And among not-functional programming languages, it's not just dynamic languages, either. Consider Dart (and seriously, consider Dart) [3]. Also, Javascript, which has had many features shoehorned-in over the years, has these and related functions.

[0] https://www.java67.com/2016/09/map-reduce-example-java8.html

[1] Double some numbers in Python: result = list(map(lambda x: x + x, array_of_numbers))

[2] In Ruby: result = array_of_numbers.map{|x| x + x}

[3] In Dart: result = arrayOfNumbers.map((x) => x + x).toList();
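To round out the bracketed examples above with REDUCE, a small Python sketch (summing, as described):

```python
from functools import reduce

numbers = [1, 2, 3, 4]

# REDUCE: fold the collection into a single value, carrying an
# accumulator along. Summing is the canonical example.
total = reduce(lambda acc, x: acc + x, numbers, 0)  # 10

# MAP and REDUCE composed: sum of the doubled numbers.
doubled_total = reduce(lambda acc, x: acc + x,
                       map(lambda x: x + x, numbers), 0)  # 20
```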


One other thing. Functional thinking has greatly changed the landscape of client-server applications hosted in the cloud as well. If your aim is apps, maybe don't bother to master the skills needed to set up and maintain a Linux server (although if you follow OP's other suggestions, you're well on your way). Instead, consider your backend as a network of microservices - functions that each do one thing, with side effects only when necessary. The host for your app? Poof! That's AWS/GCP/Azure's problem.


One other thing. You will be thinking functions first if you get into data science, say with Python/Pandas. In general, Pandas functions are vectorized, meaning they operate on whole collections at once in optimized native code rather than element by element. You really don't want to write a loop that iterates over some 5,000,000-member collection and applies an expensive function serially.
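The same idea can be sketched with NumPy, which Pandas builds on (the array size here is illustrative):

```python
import numpy as np

values = np.arange(1_000_000)

# Vectorized: one operation over the whole array, executed in
# optimized native code instead of a Python-level loop.
doubled = values * 2

# The serial version the comment warns against would be something
# like [x * 2 for x in values] - the same result, much slower.
```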


Thanks.

My school (Macalester College) recently started introducing some FP constructs/concepts in our intro class, such as map and reduce like you mentioned.

That was long after I took it, and now I am TA-ing it. Oddly enough, I am more comfortable approaching this style in Kotlin.


> I recommend starting with a gentle approach that doesn't require picking up a new language:

But then you don't get any of the newer stuff.


I upvoted the parent and want to emphasize vim key bindings. This is not necessarily vim the editor, it's your editor of choice in vim mode. Learning to use vim is like learning to touch type: it's initially a pain, but it's hard to ever go back once you've mastered the basics.

If you haven't learned to touch type (it happens, I didn't learn until I was 22), then first learn that, then learn vim.

FYI: Remap your capslock key to escape to use Vim more effectively.


I recently had a revelation: when typing longer shell commands it can be time-consuming to go back and make changes. It turns out you can use Vim-style cursor movements within fish:

https://stackoverflow.com/questions/28444740/how-to-use-vi-m...


This was a revelation for me as well. I also didn't realize that if you haven't configured vi keybindings, the default is Emacs (in bash or anything using readline). Even though Vim's my main editor, I found modal editing a bit too heavyweight on the commandline, so I prefer the default Emacs (most useful by far: C-b to go back one word and C-k to delete everything right of cursor).


I prefer the emacs bindings for the command line such as C-A, C-U (mostly due to muscle memory), but have set up my $EDITOR as vim. This allows me to do C-X C-E, which opens the current command in vim to be edited.

If you are using zsh, you need to add this to your .zshrc:

  autoload -z edit-command-line
  zle -N edit-command-line
  bindkey "^X^E" edit-command-line


you can do that in bash too, and any cli that uses readline library


  set -o vi
Starts you in insert mode. To go back to the usual command-line editing,

  set -o emacs


also zsh (tho it doesn't use readline per se)


> FYI: Remap your capslock key to escape to use Vim more effectively.

That's one way of doing things rather than a general "FYI". E.g. I switched capslock with ctrl. There are many ways to exit insert mode; I prefer:

  inoremap jk <esc>
  inoremap kj <esc>


I use that mapping all of the time. It is also helpful to use

  inoremap <esc> <nop>

as this helps you train your fingers to stop using <esc>.


You don't need to know how to touchtype. Programmers are not a glorified typing pool. Furthermore, my father† has worked every job there is in a world class news organization since entering adulthood and never learned to touchtype. He could type 140 WPM.


Learning to touch type took roughly 10 hours over 2 weeks, and it made my life immeasurably better. Your (dad's?) mileage may vary.


What exactly improved?


Imagine having to look at your mouse every time you clicked on something. Now imagine you no longer have to do that.

If you're really curious, just learn to touch type and find out. It doesn't take all that long if you're already a solid typist. I'd be fascinated to read an article from someone who learned how to touch type and thought it was a waste of time.


OK so terminology: touchtype to me means that you learn to finger the keyboard in a specific way, not just that you can type without looking at the keyboard. I can do the latter; I just never bothered with the "correct fingering".


For me it improved my physical comfort significantly. In particular it solved my back pain because I didn't spend so much time with my head looking down.


One other method to quickly exit insert mode if you can't remap keys is ^C. I did not know this even though I used gvim on Windows computers not under my control for years.


True and handy. However, Esc and ^C don't have exactly the same behavior -- from the documentation:

<Esc> or CTRL-[ End insert or Replace mode, go back to Normal mode. Finish abbreviation.

CTRL-C Quit insert mode, go back to Normal mode. Do not check for abbreviations. Does not trigger the InsertLeave autocommand event.


Re: i3wm

I installed it a couple years ago, went whole hog, down the rabbit hole, but realized a couple of things.

1. I rarely ever use anything more than a simple L/R split.

2. When I do use something more complex, it's almost always in the terminal, in which case why not use tmux?

These days I'm back to using GNOME because Ubuntu switched to it from Unity (which had a weird multitouch bug that drove me crazy).

What do you get out of i3 beyond a simple L/R split supported by simpler wms and how often do you use it?


While I do use i3 for more than L/R splits with stacking etc. the largest benefit I get from it is workspace management.

I use i3 with polybar and have dedicated workspace icons (web browser, terminal, editor, to-do, email, music, etc.) for quick navigation between different applications. Over time I’ve built up muscle memory (e.g. $Mod+3 will bring me to my editor) that has significantly sped up my development process. While you could use another window manager for a similar purpose, I find the relatively minimalist approach of i3 + polybar in my case to be fast and highly configurable.


I can echo this! This is exactly how I feel too - the muscle memory around workspace management and the scratch windows (floating windows that you can toggle in/out of visibility based on a single keystroke) are the real boosters for me rather than splits. Splits are useful but the most common use I've seen myself do is to have a browser and a terminal / editor in splits.


> The scratch windows (floating windows that you can toggle in/out of visibility based on a single keystroke)

Is the toggle an i3 concept? I’m interested in it - can you give me the function name so I can look up doco? :)


Not GP but I think it's `scratchpad toggle` - scratchpad is the keyword to lookup anyway.


Thanks, interesting concept that has some valid applications :)


I've got a portrait monitor connected to a laptop, so I end up splitting the monitor's pane vertically.

I treat each workspace as dedicated for a specific purpose - Dev, Browsing, Chat, etc. That gives me quick mnemonics to hop to each space: MOD+1, MOD+2, MOD+3, ...

Within my Dev workspace, I use a tabbed pane for top-level organization: browsers (stacked), IDE, terminal, Emacs (magit + org), etc. This keeps my focus on that space when doing dev, and away from the laptop monitor, which is only occasionally useful as a reference.

I'll occasionally stack a terminal beneath my IDE if the current task requires it, e.g. to test a deployment or a project task.


Off the top of my head, things I use most beyond L/R split include:

1. Workspaces. Not at all unique to i3 but I’ve kept mine themed and automatically load certain apps into the same workspaces—all things i3 makes easy to do.

2. Floating scratchpad for media player. Nice to have my music controls always accessible but only visible when I unhide them.

3. Vertical split beneath my editor with a terminal. Just my personal preference, but I typically have L/R with code/browser and then split the code half vertically.

Being able to move windows across displays and workspaces quickly are other pluses, but again, not at all i3 exclusive.


I moved to awesomewm a few years back and I absolutely love it. I can open half a dozen terminals at once and they'll all automatically be ordered in a way that's immediately useful to me, where all the terminals are visible and (largely, depending on the automatic layout set) equally sized. And if I want another layout, I just press one combination to cycle through all the layouts I've configured. tmux doesn't give me that kind of flexibility. It doesn't feel anywhere as fluid or seamless to switch between half a dozen (or more) terminals.

It means I can have an overview of a bunch of different things and keep terminals context-specific (1 terminal for htop, 1 for docker, 1 for whatever remote test environment, 1 for project A, 1 for project B, 1 for some other remote host I need for some reason, etc.) If I want to do a new task unrelated to anything I'm doing before, I don't need to break the context of an existing terminal, I just press Alt+Enter, it's automatically slotted into a place where it's completely visible and usable and I can do that task quickly. When I'm done, I can close it, again, without disturbing the context of all the other terminals. It's just incredibly freeing to have that and I feel it frees a lot of cognitive load by being able to go back to a terminal for a certain task and immediately see exactly where I was and what I did last.

Also, much like the other comments, I use task-specific virtual desktops all the time. First desktop is for all the terminals. Second is for browser/communication. Third is for project A. Fourth is documentation related to projA. Fifth is projB. Sixth can be more documentation. I often have 10 virtual desktops for different things. I don't want to imagine what it'd look like if I had it all on one desktop.


Sorry to hijack your comment, but do you know of resources I can read to write my own custom layouts using awesomewm? I have a 21:9 monitor, and would like to write a layout where I can have a game running in 1080p + some windows on the side. Currently I do this just fine with floating windows, but surely there must be a way to use the tiling system.


Hah, it's probably possible, but awesomewm's documentation for this kind of thing isn't great. I've looked into it but was put off by the complexity since I don't have that much time to spend on that kind of thing. I did find a couple of links that might be helpful as a jumping off point?

https://stackoverflow.com/questions/5120399/setting-windows-...

https://stackoverflow.com/questions/45411844/before-diving-i...


I depart from that L/R split pretty regularly, especially when coding. I frequently have a single large window taking up half of my screen, and several smaller windows stacked vertically on the other half. The big window is usually the editor, but sometimes the browser if doing web development, or possibly a pdf or something like that. The little windows might be the editor, or a browser, or general command line work, or the stdout of a server or other daemon that I'm working with.

I sometimes do move to more of a tmux split workflow, especially if I'm working on remote machines, but it's just much nicer to have the same keyboard commands for all of my windows.


I always have at least 4 windows open in a project workspace, those being a text editor, a file browser, a terminal, and a web browser.

Currently I use i3 with packages pulled in from XFCE to handle sessions and power management, plus xfce4-appfinder and xfce4-panel (started and killed with MOD keys of course) because I wanted something beyond d-menu / b-menu.

It all works very well and was easy to configure.


Working on high-resolution monitors allows me to do LLRR splits. Also, the couple of seconds I save from not moving my mouse is, I think, a worthwhile tradeoff. i3wm's floating mode doesn't really make me _lose_ anything, either. All upsides.


My typical setup is that I have a single desktop for each project I'm working in. Web projects split into 4 panes. Top left, browsers (different chrome profiles running in different i3 tabs), bottom left dev console (same), top right vscode, bottom right, i3 tabs for all the consoles I have open (`npm run`s, git, etc). For go projects I have full left split IDE, ssh in to vagrant top right, bottom right are local consoles (tabbed) for building/running tests/etc.

This would not be possible if I weren't using 4k monitors. That was a big shift for me, because now I think of each 4k monitor as 4 1080p displays.


I agree wholeheartedly. Especially when most of the work I do is on a remote server anyway, so I'm already in tmux. I still use i3wm on older laptops just for the battery life gains, but 95% of what I do is Firefox + Terminal emulator, and alt-tab is just as fast. My main workstation is just gnome and it's fine.

(Sidenote: is there any sort of linux libvte-based terminal emulator that has tmux integration a-la iTerm2? For when I do use i3, it would be really nice if I could spawn a new terminal on a remote server, attaching through an existing tmux session.)


Re. Sidenote: It doesn't look like there are many terminal emulators with tmux built in¹, but you can bind a keyboard shortcut to `xterm -e tmux attach'. (rxvt[-unicode] and st also support the -e flag, to run a given command instead of the default shell).

1: https://unix.stackexchange.com/questions/189805/what-termina...


That would work if I was running tmux locally, but what about when I’m running tmux on a remote server?


Maybe use `xterm -e remote-tmux' where `remote-tmux' is a small script in your $PATH similar to https://stackoverflow.com/questions/27613209/how-to-automati... ?


For a very long time I didn't use anything besides plain vim. The two biggest things to add to your vim use are undodir and YouCompleteMe. Crazy that I went without either of these for so long; I wish undodir were part of the defaults.


I made persistent_undo as part of Google Summer of Code in 2011. Very grateful I got to do that (and for the mentorship of Bram Moolenaar), and I'm so glad that this became part of your essential workflow. I also can't live without it at this point.


Thank you so much! It is very much appreciated and I don’t think I have any cases where I wish it was better. It works exactly like I expect and does what it says on the box.


Thank you for making it, I can't imagine my life without it.


I switched from vim (with YouCompleteMe) to VSCode about 4 years ago and I've recently discovered the intellisense engine for VSCode is now available as a vim plugin: https://github.com/neoclide/coc.nvim.


This makes me realize I really need to update my .vimrc and some plugins.

I've been toting around the same .vimrc for like 6 or 7 years and there are so many better plugins now.

Vim has been probably the most profitable tool I've ever picked up. Or maybe git. But I think Vim.


I've been using vim for 20+ years and have never been into extensive customizations. That was part of the attraction, because I had to deal with a lot of remote servers, often off-shore. With out-of-the-box vi/vim I could get the most done with the fewest keystrokes. Someone made a comment above about IDE editors with vim emulation. Wish every IDE would do that. RStudio, for example, is not exactly vim, but I find it close enough. If only Spyder and Jupyter would do that.


I've used these plugins to get vim bindings in Jupyter Notebook / Lab.

Jupyter Notebook: https://github.com/lambdalisue/jupyter-vim-binding

Jupyter Lab: https://github.com/jwkvam/jupyterlab-vim


Thank you sooo much!!!


To add to this, vim quickfix-reflector ( https://github.com/stefandtw/quickfix-reflector.vim) was life-changing for me.

It lets you modify and save code from the quickfix buffer, so when you search for something and it shows up in the qf, you can do a find replace / edit / etc. This is especially great for mass refactoring / renaming.


Woah, that's awesome. I use ^f for ack.vim, so combining that with quickfix-reflector sounds superb. Thank you!


By "undodir" are you referring to the "vim-undodir-tree" plugin? Because the "persistent_undo" feature that is built into Vim/Neovim (normally) is what I think of when I hear "undodir".


New(ish) vim user here. I couldn't figure out how to install YouCompleteMe the other day, but I had no trouble with coc.


> Vim; modal editing and vi-like navigation can blow open your mind. Explore the existing plugins to accomplish everything you want to be able to do. It's all there.

The reason I end up ditching Vim after a few weeks every time I try it (4 serious attempts now) and go back to IntelliJ (which I’ve used for two decades) is that I never found a solution to the following trivial issue:

Imagine you have a large Java codebase and you want to refactor all occurrences of a method called “doFoo()” to give it a better name - how do you do this in Vim?

This is a single keypress in IntelliJ and I use this function very frequently but I never found a way to do it in Vim.

Note: I only want to change THIS doFoo() method, not the hundred other doFoo() methods in the codebase.

Also note: yea, this includes all implementations of the interface, all abstract classes, all classes that extend a class that implements the interface and all other funky polymorphic things, and NO unrelated code. And do it all in one keypress, don’t have me manually go through line by line.

Any ideas if this is possible now?


If you keep up with the LSP space, this is now possible with Vim

https://github.com/eclipse/eclipse.jdt.ls

https://github.com/georgewfraser/java-language-server

Are both good examples. You'll need a corresponding client like this one

https://github.com/prabirshrestha/vim-lsp

There are others, but this one is pretty good. Next release of Neovim will have one built into the editor. Frankly, it's a bit of a hassle but once you get an LSP provider set up you can get one for just about any language you're using


I am about as hardcore a vim user as they get and I would never edit Java without an IDE.

That being said the first thing I do after installing IntelliJ is open the plugin settings and install IdeaVim.


I'm the same, s/vim/emacs/, though I'm also very comfortable in Vim.

I use Emacs for a tonne of stuff, and basically any other language I use. But for Java, with its deeply hierarchical codebases and general verbosity, there's just no sane way to manage that complexity smoothly without a good IDE.

(One could argue that if a language necessitates an IDE to work with it, then that's a failure of the language's DX. But that's an entirely separate discussion.)


You don't need to choose between IntelliJ and Vim, because you can use Vim keybindings within IntelliJ.

So you get all the powerful IDE commands and the high speed of Vim commands at the same time.

It's easy to set up!


My life was also improved with some additional aliases in my git config:

    [alias]
        dfif = diff
        idff = diff
        grpe = grep


I'm always puzzled how often the wheel gets re-invented ... https://git-scm.com/docs/git-config#Documentation/git-config...


This is only for git commands though? `diff` and `grep` are their own thing.


Not when they're git subcommands, or misspelt git aliases as above.
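
The built-in correction that link points to is `help.autocorrect`. A quick way to try it without touching your real config (the throwaway-HOME trick is just for the demo):

```shell
# Try git's built-in command correction in a sandbox: point HOME at a
# temp dir so the real ~/.gitconfig is untouched, then set the option.
export HOME="$(mktemp -d)"

# The value is in tenths of a second: git waits 2s, then runs its guess,
# so `git stauts` becomes `git status`.
git config --global help.autocorrect 20
git config --global help.autocorrect   # prints the stored value: 20
```

If I remember right, newer git releases also accept keyword values here, such as `prompt` (ask before running the guess) and `immediate` (run it right away).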


Years ago Interlisp (does anybody here remember Interlisp?) had a function called DWIM (AKA Do What I Mean). It was an (optionally enabled) part of the REPL. If you typed something that made no sense, DWIM would try to figure out what you really meant to type, and offered to run the corrected command.

I have often wondered why that functionality disappeared, and why no one has tried to resurrect it. Search engines offer corrections all the time; why doesn't bash?
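
For what it's worth, bash does expose a hook for exactly this: when command lookup fails, bash calls `command_not_found_handle` with the original words (this is how some distros wire up "did you mean" package suggestions). A toy DWIM sketch, with a made-up correction table:

```shell
# Toy DWIM for bash: command_not_found_handle is invoked when a command
# isn't found. Correct a few common git typos, otherwise fail as usual.
command_not_found_handle() {
  case "$1" in
    gti|igt|gt)
      shift
      echo "(corrected to: git $*)" >&2
      git "$@"
      ;;
    *)
      echo "bash: $1: command not found" >&2
      return 127
      ;;
  esac
}

gti --version || true   # under bash, the hook corrects this to: git --version
```

Zsh's `correct` option (mentioned below) is the closest mainstream descendant of DWIM, with all the same capacity to drive you mad.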


Quaxity quuxity,

Teitelman's Interlisp

Has a DWIM feature that's

Really a screw;

MACLISP has evident

Superiority

Letting its customer

Mean what he do.

--The Great Quux (Guy L. Steele)

This poem indicates the frustrations that hackers had with DWIM at the time, which may explain why no one tried to resurrect it. Too-clever-by-half features intended to help tend to drive people nuts, especially when they fail. Even when they succeed, they interrupt the user's flow and become like that dialog box Windows users just dismiss.



Zsh has this. But it mostly drives me mad.


bashrc snippet (with extra newlines, for folks on mobile apps like materialistic that don't understand code formatting):

    # I do this an embarrassing amount

    alias fgf='fg'

    alias fgfg='fg'

    alias gf='fg'

    alias gfg='fg'


This resonates... from mine:

  alias emcas='emacs'

  alias emac='emacs'

  alias emasc='emacs'

  alias enacs='emacs'

  alias emas='emacs'

  alias emascs='emacs'

  alias eamcs='emacs'

  alias eemacs='emacs'


I have aliased `emacs --daemon` to `emacsd` and `emacsclient -t` to `e` because I use it so much.
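
Spelled out as shell rc lines, that's:

```shell
# The two aliases described above, as they'd appear in a ~/.bashrc:
alias emacsd='emacs --daemon'
alias e='emacsclient -t'
```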


You might be interested in https://github.com/nvbn/thefuck


In a similar theme, I'm really glad I added ":q" as an alias for "exit" in my shell.


I must forget to give grep a filesystem path argument at least 10% of the time I invoke it. What I'm intending to do in all of those cases is recursively grep in the current directory. "Warning: recursive search of stdin" might be my most-seen console error message.


This used to happen to me constantly. I fixed it accidentally in switching to Ripgrep [0] which defaults to recursively searching the current directory. Bonus: it parallelises too!

Honourable mention also to FZF [1] which not only makes it trivial to locate a file in a directory tree, but has revolutionised my history use with its fuzzy matching.

[0] - https://github.com/BurntSushi/ripgrep

[1] - https://github.com/junegunn/fzf
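
If you'd rather stay with plain grep, a small wrapper can give it the same default. `grepd` is a made-up name and the single-argument heuristic is deliberately crude:

```shell
# With exactly one argument, recursively search the current directory
# (with line numbers); otherwise pass everything through to grep as-is.
grepd() {
  if [ "$#" -eq 1 ]; then
    grep -rn -- "$1" .
  else
    grep "$@"
  fi
}
```

So `grepd needle` searches the tree instead of hanging on stdin, while `grepd -i needle *.c` behaves like normal grep.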


Nice! I create two-letter aliases, it truly helps:

gc = git checkout

gs = git status

etc...


Here's mine:

    alias g="git"
    __git_complete g _git # enable git autocompletion (branches, etc.)
    alias gc="git commit -am"
    alias gp="git push"
    __git_complete gp _git_checkout # checkout is more useful than _git_push because it autocompletes the branch
    alias ga="git add -A"
    alias gd="git diff"
    alias gb="git branch"
    alias gx="git checkout"
    __git_complete gx _git_checkout
    alias gs="git status"
    alias gl="git log"


Good collection! I have many of these, plus a slightly longer one for quick fixups that happen all too often:

    alias gcane="git commit --amend --no-edit"


probably you will like

  function fixup() {
    git commit --fixup="$1"
  }
  function refixup() {
    git rebase -i --autosquash --autostash "$1"^1
  }
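
To make the flow concrete, here's a self-contained run in a throwaway repo (the functions are reproduced from above; the repo, file names, and identity config are demo-only). `GIT_SEQUENCE_EDITOR=true` just accepts the generated rebase todo list, so the interactive rebase runs unattended:

```shell
fixup()   { git commit --fixup="$1"; }
refixup() { GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --autostash "$1"^1; }

# Throwaway repo with a demo-only identity.
cd "$(mktemp -d)" && git init -q .
git config user.email demo@example.com && git config user.name demo

echo base > base; git add base; git commit -qm 'init'
echo one  > f;    git add f;    git commit -qm 'add f'
echo two >> f;                  git commit -qam 'extend f'

# Oops: f2 should have been part of "add f". Record it as a fixup
# commit, then fold it back in with an autosquash rebase.
echo fix > f2; git add f2
fixup "$(git rev-parse HEAD~1)"             # message: "fixup! add f"
refixup "$(git rev-parse HEAD~2)" >/dev/null 2>&1

git log --format=%s   # newest first: extend f, add f, init (fixup! is gone)
```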


I have a ton of aliases but I have them all preceded by an underscore. That way, I don't muck up native commands.

alias _up='sudo apt update -y && sudo apt upgrade -y'


Nice, I have those same aliases.

Also not afraid to add multi-letter aliases if I find myself typing the same multi-word command over and over.

For example git diff master HEAD becomes gdmh


alias g=git

And then define one and two letter aliases for the things you do often:

st=status

l=log --with-prettiness

ap=add --patch

shit=reset

co=checkout


heh, I should make an alias for no-break space (code point 160/00A0) + 'grep', because I type it so often when I pipe and get:

Command ' grep' not found, but there are 17 similar ones. Maybe I'm not the only one :).
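
You can, in fact: bash allows a no-break space in an alias name, and `$'…'` quoting lets you spell the byte sequence without pasting an invisible character into your bashrc. This is bash-specific:

```shell
# Map the NBSP-prefixed name back to the real command. U+00A0 is the
# UTF-8 byte pair C2 A0; $'…' lets us write it explicitly.
shopt -s expand_aliases        # off by default in scripts; on in interactive shells
alias $'\xc2\xa0grep'='grep'
```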


I handle this case with auto-correction for all one-off errors via <tab>.


> start with Clojure, not one of the less-practical languages.

doesn't expose you to typed functional programming (the ML school) though.


THIS. Especially Clojure. If you want to become a better JavaScript programmer, definitely dabble in Clojure.


for those wanting to brush up on their regex skills, here's a nice tutorial:

https://regexone.com/
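
A couple of runnable one-liners in that spirit, combining regex with the GNU tools mentioned upthread:

```shell
# Swap the two halves of a snake_case word using backreference groups:
printf 'alpha_beta\n' | sed -E 's/([a-z]+)_([a-z]+)/\2_\1/'   # beta_alpha

# Print only the matching part of a line (-o) with a character class:
printf 'port=8080\n' | grep -Eo '[0-9]+'                      # 8080
```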


The Jetbrains suite. Lightens the cognitive load, makes it easier to refactor and keep code tidy. All of which allow me build better software.

For almost everything else e.g. git, learning how to use the command line instead of a UI is the best way for me to learn how the tooling works.


I'm dependent on Jetbrains IDEs for most of my work. They really shine in showing the user best practices and recommendations. A lot of programming concepts are the same across languages, and the IDE helps you find the right function/method with its suggestions. At least for Java, it'll suggest variable names, suggest when a loop can be converted to a stream, generate templates for unit tests, tell you if a variable might be null, etc. There are a ton of small features that all add up to a great experience. 10/10 would recommend for a new programmer.


Yes, I would agree with that. If you make websites there is just nothing better than PHPStorm. It is one of those programs that just works and makes your life a hell of a lot easier. Also a huge shoutout to 'lazyone' on SO for always answering my questions. It's one of the rare companies that actually understand what you're saying instead of just pasting canned replies.

BTW I'm not affiliated with them but they're having a 50% discount sale right now.. iirc a 25th birthday sale. They seldom give out discounts, so this is a great time to upgrade too.


Hmmm, I'm not seeing the discount pricing you speak of.


I guess it's only for PHPStorm, ending about 14 hours from now

https://www.jetbrains.com/lp/php-25/


Oh darn, I already have WebStorm and PHPStorm. Cheers.


The discount applies to renewals too.


Looks like it’s specific to PhpStorm, and not any other JetBrain products.


I agree. I’ve been using IntelliJ Ultimate for a few months now and I don’t see myself going back to Sublime or Emacs. I work mostly with Ruby, TypeScript, Golang and occasionally Clojure. I recently found out that with the Ultimate version, you can install all the language plugins and won’t need the other IDEs like GoLand, RubyMine, WebStorm, etc. They’re just plugins, so all your settings can live in one place.


It's crazy how many disparate technologies Ultimate integrates for you. Here's a reddit post I wrote a while ago that lists nearly a dozen different technologies/contexts that I use IDEA for in a single project: https://www.reddit.com/r/java/comments/by2ow0/do_you_use_you...

And that's not the "toolbox" license suite - I do all that for the price of a single Ultimate subscription.


License for Ultimate is cheap enough that if you're going to buy 3 or more of their products, it's better to just buy Ultimate (assuming the tools are provided by plugins in Ultimate).


The way the 'better software' effect shows up for me is that I see problems other people walk away from: problems that are just too complex or diffuse, where their priorities don't make the struggle worthwhile.

Tools that eliminate even a little bit of cognitive load moves the point of no return a little bit farther, which means there are kinds of problems you can touch that someone else won't or can't.


Seconded, although to be fair the choice of IDE doesn't really matter as long as it's relatively good. When I moved from vim to Jetbrains, one of the biggest things was seeing all the small errors including spelling mistakes. Being able to easily see and fix minor syntax errors or things like missing variables etc really makes a difference, especially when you are working on a codebase where that was missing for a long time.

If anyone is an emacs/vim user it's certainly worthwhile to enable similar error reporting plugins to get the same effect


The arrival of Language Server Protocol is going to make IDE-like functionality more evenly spread across traditional text editors, "modern" text editors and "IDEs". In Emacs I recommend eglot: https://github.com/joaotavora/eglot


I used to be very gungho on text editors (started my career with Sublime, moved to VSCode later), but I've turned the other way and use Jetbrains products now.

LSP is decent, but I've yet to see any language with the depth and quality of Jetbrains' IDE support, by a fairly large margin. I've had to fight VSCode settings many, many times in the world of Go, but Goland "just works" for the essentials - intellisense and code navigation are just so much better. Python and TypeScript are probably the best-supported in VSCode, but they still don't meet the mark. The rule applies doubly so for languages that aren't strongly typed. Breakpoint debugging for code and tests is similarly hands-off.

They add all sorts of ecosystem-specific know how to make the experience smooth, e.g. Rails, Rspec in RubyMine.

I still use command line tools for every other part of my workflows, e.g. Ripgrep, git, dependency management, but I haven't found anything else that compares for coding with really excellent intellisense and code nav, other than Visual Studio proper for C#.


Yes I definitely believe everything you say. I will say that currently I'm using LSP (Emacs Eglot) for Python (pyls) and Rust (rust-analyzer) and the difference is (unsurprisingly!) night and day. I love Jedi and I'm sure Palantir did a decent job but... but rust-analyzer on the other hand, I wonder if in a year or two, is it possible that Rust in Emacs might not be so far off the best "IDE"?


Back then (10+ years ago?), IDEs were bloated and slow, with convoluted interfaces, and I only used one when absolutely necessary (like Eclipse for Android before Android Studio became the default). But my, JetBrains saves me so much sanity; it changed my definition of what an IDE is meant to be. I only used lightweight text editors before then. (Used Sublime Text before the switch.)

Great thing about JetBrains is they work out of the box, unlike vim where you need to spend a whole month customizing with 20 random plugins just to get back to working on the project, and it's still probably only 30% as good as JetBrains'.

VSCode is being developed rapidly and I see many good plugins, but it's still quite far behind except for the launch speed. At least it can be seen as a competitor that makes sure JetBrains keeps innovating and keeps the performance of their IDEs sane, so as not to lose customers to VSCode.


VS Code is like 90% good enough for me compared to Eclipse. I was evaluating IDEA and VS Code to switch to, and VS Code with its Java extensions was an easy winner considering its launch speed and responsiveness. IDEA, even though it's tons better than Eclipse, still feels clunkier than VS Code.


Unbelievably amazing to be able to shift-click on syntax and jump to the source - even works pretty well in dynamic languages.

I HATE using a Gem then needing to break my train of thought to pull up the official documentation to see what an API interface looks like. Ctrl+click. In and out in 15 seconds.


You just taught me something! I've been using keyboard hotkeys to jump to source. On Mac, I can hold down command (probably ctrl on Linux) and it lights up like a link, shows the gist, and lets me click to go to source.

This is why I love these sorts of threads :)


Ctrl-B also goes to definition (at least on Linux) if you don't want to leave the keyboard. On definition Ctrl-B lists all usages of the term under caret.


After a while, you wonder why this isn't the default in other editors. (At least VSCode jumps to obvious sources.)


Seconding Jetbrains products. They’re not the lightest weight text editors and have something of a learning curve. However, they have done a lot to improve my productivity


Came here to say the same. It seems to be the most intelligent IDE I have used so far in that it can understand what you are trying to achieve (well, most of the times) and help you write better code. I learnt a lot in Python just by following IDEA's code reviews and trying to understand the rationale behind each suggestion. And it really shines when you are developing in a JVM language like Java or Scala.


Same. I started using it for Java long ago and now just subscribe to the "all you can eat" license [1]. $249/year, and 100% worth it so that I can pop open pretty much anything in a familiar, well-tuned interface.

[1] https://www.jetbrains.com/all/


When I found the jetbrains Ides I really started to enjoy programming again. I could think about what I was trying to accomplish instead of getting bogged down in programming overhead. Also paid for the toolbox, worth every penny.


Interesting that you prefer both extremes of UI design.

Jetbrains UI that shows available options or pops them in real time.

And command line UI; requires reading the docs to learn options, but can be powerful and chained with other commands via pipes, ez automating UI interactions into scripts, etc.


This is myself as well, enjoying the extremes. Having all the whiz-bang guidance is great and really speeds up my work, except when it doesn't work. Then it's great to be comfortable falling back to something rock solid and unbreakably simple. I never have to "fall-back" multiple times in successive frustration. There is one fallback and it ALWAYS works.

To me it's not worth learning the "in-between" tools for the extremely limited circumstances that I'll need something less than IntelliJ but more than vim/bash. I even hesitate to customize my vim much because I need to rely on it as a fallback on nearly any system, and on novel systems I can't rely on my customization.

I'm not dogmatic about only sticking to these - I'm comfortable with VS Code when it's what my employer/workgroup provides, and I'm comfortable with Sublime, as it's particularly portable (can be run off USB). So sometimes Sublime is the fanciest option.

But anything in between JetBrains and vim needs a real "reason" to bother investing the time to learn.


Just downloaded the GoLand 30-day trial. First impressions are I now remember how slow and stuck-in-goo Java-based IDEs feel.

What cool stuff should I look at? I am willing to be sold on this.


I have found most IDEA performance issues are either the configured Java heap space or the indexing of project files that could have been ignored. The default memory settings are generally pretty conservative; some larger projects run into issues immediately. If your system has plenty of RAM to spare, I would recommend just giving it a few gigs and seeing if things improve.


I don’t notice slowness on it personally. Maybe you are noticing the initial indexing it does. That doesn’t happen often with GoLand.

Make sure you enable Golang modules in the settings also.

Anyways just mentioning some stuff off the top of my head I enjoy:

Debugging is a great experience.

Can find a plugin for most anything, I use the Kubernetes one for syntax completion and documentation. (alt-q I think)

Also you can create .rest files and compose http requests and trigger them right in the files, which I thought was cool.

The documentation pop up by hitting alt-q in general is pretty cool. Don’t have to run over to godoc.

Then most things you’d expect from an advanced ide. Multi line editing... jump to definitions and implementations... Project wide code search and replacement


I'm not sure about Go's specific characteristics, but for PyCharm I love the full-project, semantically analyzed code navigation and remote step-through debugging. I use Visual Studio Code for most JS stuff, but it's awful for wrangling many files simultaneously and learning a big codebase. WebStorm allows me to search for function calls and other things across the entire codebase much faster. I prefer VSCode's git UI and use both apps.


I can compare it with VSCode. Goland is much better at working with multiple Go Versions, which is a big thing in my daily work. Other things that it does better than VS Code are auto-generating unit tests, refactoring function signatures, better package management support, easier to set-up different build/debug profiles ...


Jetbrains offerings are quite fast. What are you comparing them to?


Probably Sublime text. It's just too fast and I can't switch to anything else. I open heavier ones like Eclipse, VS Code etc. only when I need to refactor.


I notice the same thing, and I enjoy IntelliJ / Sublime / Vim. IntelliJ often feels sluggish.

Maybe this could help: https://blog.jetbrains.com/idea/2015/08/experimental-zero-la...

Here was a 3rd party analysis from 2015, showing a reduction in latency when editing XML files in IntelliJ IDEA from ~70ms with large jitter to 1.7ms with small jitter: https://pavelfatin.com/typing-with-pleasure/#summary

I wonder if it's turned on by default today, 5 years later.

Some other techniques: https://medium.com/@sergio.igwt/boosting-performance-of-inte...


Seems like the IntelliJ zero latency typing is on by default since ~2017 releases.

https://blog.jetbrains.com/idea/2016/12/intellij-idea-2017-1...

https://blog.jetbrains.com/clion/2017/01/clion-starts-2017-1...


at least for JS, the refactoring tools save a lot of headache. write code before deciding on variable or function names and then one click to refactor everywhere in the codebase.


Goland is a Java-based IDE.


I'm a die hard Emacs use but when I need to bounce around a codebase or refactor, I jump into the various jetbrains tools. Friggin awesome


You aren’t die hard enough.


That would easily be using an integrated REPL.

The more integrated it is (with your IDE/editor) the better the experience and productivity boost.

And the difference is quite large. When you are working with a language that has first class REPL support you start to

- 'get' why Emacs exists

- become faster at writing code in general

- write in a much more experimental and open-ended way

- become more motivated in testing smaller assumptions and asking questions about your or other peoples code

With "first class support" there are three dimensions:

(1) The REPL and the editor/IDE have to understand each other well.

(2) The language itself has to be malleable and (de-)compose well in terms of syntax and idioms.

(3) many things in the language are first class expressions or in other words: there is a high degree of tangibility in the language constructs.

Most dynamic languages have workable/decent support for REPL driven development so it is always worth testing out.

You find excellent support in: Clojure (and of course other Lisps) and Julia from my experience.


I completely agree with the point about integrated REPL/IDE, and wanted to share some of the combinations I have used in the past, since it can be a concrete getting started point for those who are curious. Some of these are not literally repls, but IMO give a similar experience.

- ClojureScript with Figwheel and the web browser

- Clojure with Emacs Cider, Clojure with Cursive

- R and Rstudio

- Matlab

- ipython jupyter notebook

- Pycharm debug breakpoints that are triggered by unittests (Running the unittest to initiate a python repl at the breakpoint)


What I really love about R/Rstudio is that you can highlight a few lines of code and execute them in isolation


My main responsibility is developing and maintaining a microservice written in Java.

However, one thing I've found invaluable over the years is developing operational tools to support my deployed code, in a language with a built in REPL.

At different times, I have used Clojure and Jython for this (high level way to call my Java libraries, or to invoke APIs over the network), and most recently Ruby (has been easy to deploy and run scripts or ad hoc commands over irb for operational tasks, in the same environments where my service runs).

This allows me to build up code over time that I can use to

* Quickly make calls to my service to triage or debug production issues.

* Write scripts to quickly validate a new install.

* Script operational tasks at a high level that doesn't make sense to build into the service itself (can allow the service to be more stateless, for example).

* Bypass layers and make calls to the underlying database (can be more powerful than the command line tools dedicated to a specific database).

* Can be more powerful and composable than curl or Postman for making web calls.

* Have used it to analyze the state of messages in a Kafka topic (with a custom gem).

So I highly recommend building a tool set around a language with a good REPL for anyone responsible for a service with a REST API, or any other kind of API available over a network.


Have you tried Groovy? Sounds kind of ideal if your main work is with Java. I have a BeakerX (JupyterLab+Groovy) notebook open pretty much continuously next to my IDE while I code so that I can validate all my assumptions as I code. A nice workflow is that you start with a snippet of experimental code, which you tidy up in the REPL, then it splits into the actual function and test code, one part going to the unit test and the other to the real code.


Yes, this is extremely useful, thank you for sharing.

I also often use the REPL as a "tool" rather than just a editor feature.

For example this week I'm working on a data integration. This is a very specific one-time task, so there is currently no need to write accessible production code. I can just use the REPL to do the "ETL" and leave the code as-is.

There is merit in keeping it and find functionality to extract, abstractions/compositions in the future, but the point is that the integrated (Clojure) REPL itself is already sufficient, powerful and very ergonomic tool.


Writing the overall design in plain English before writing the implementation. Not super detailed, but the main data structures, invariants, and mechanisms that will make the implementation work. Then I start the implementation, refining the document as I discover new things.


This, most of all. Substitute native language if not English; the important thing is that the project be defined and developed in both a human language and a computer language, so that mismatches can be identified and resolved.

  * Start by describing what you are trying to do.
    * Specifically.
    * Not 'build a web business to enable users to achieve their
      potential', not 'create another library for X but simpler'
      but *specifically what the software will do* and, most
      importantly, the priorities of scope increase (it'll happen
      anyway; just get the priorities and their order down in
      text ASAP).
    * Put it in a readme.txt or something.

  * For any given subsystem of the code (meaning: any section
    which can sort-of stand on its own) write another such file,
    in more detail.

  * Let these files guide your tests too.

  * Keep them up to date. If priorities change, *start* by
    updating the readmes. The code isn't immutable; nor is the 
    plan. But the plan comes first.

  * When unsure how a new subsystem or feature is going to work,
    write out your ideas and thought processes in text. Don't
    start coding at all for a day or two after you sketch out the
    basics. *Append* to this file instead of replacing large
    parts.
[edit] Wasn't intended to quote that part (sorry to mobile users) but I can never remember how to get bulleted lists on this site...


I found that combining this approach with writing down basic interfaces works really well - after I have a rough written idea I iterate over the interface design with full descriptive comments on both the interface and the methods.


Have any samples/examples you can share?


Things like that https://gist.github.com/antirez/ae068f95c0d084891305. Usually more detailed with data structures, but this one was a very conceptual thing.


Absolutely this. When facing a tough problem it’s a great tactic to write out what you’re trying to solve and how you plan to solve it in prose-style English.

I’ve done it often without ever sharing my writings with anyone at all, and always felt that the code turned out relatively good as a result.

I’d also add that a rough prototype that you throw away also helps line up your thoughts.

The key is having something reasonably concrete in front of you that forces you to think of the invariants, compromises, etc. in the system, before making all those decisions concrete by writing loads of code.


Leslie Lamport thinks the same:

https://dl.acm.org/doi/fullHtml/10.1145/2736348

Even a rough sketch is good enough a lot of the time:

https://m.youtube.com/watch?v=-4Yp3j_jk8Q


This is the most useful post so far. I take it a step further and make a diagram in draw.io to understand exactly the data that is coming in and out. This is especially important for working with legacy code where you might get random crap like a name instead of an id and that could throw off your design.


here is another useful tool to draw graphs/flowcharts: https://whimsical.com/


This is coming from creator of Redis so we better listen :)

Thank you for your work Salvatore!


Exactly this one.

I don't get why the top comments are all about some technical tooling, as if the major part of a developer job would be typing.


I'm trying to be better at this. Any examples/recommendations you could share?


The way I do it is that I whenever I start working on some functionality and it’s not immediately obvious how to implement it, I open a text file and write down my thoughts as something between stream of consciousness and design document, usually formatted as a multi-level bullet-point list.

I start with what I am trying to achieve and list the different design approaches I can think of, adding advantages and disadvantages of each one as they come to mind. By the time I’ve written down all my thoughts on a design decision, it is often clear to me which approach I favor.

This can be repeated for more and more detailed aspects of the implementation (e.g. “which function should this be added to” or “what to name this function/struct/variable”) until I feel like I can come up with the remaining details as I’m writing the code. If I get stuck somewhere later on, I can always go back and add more details in the text document.

For larger or more important features, this list can be cleaned up and become documentation or perhaps a comment somewhere, but I often find that the writing is a useful tool to get unstuck and to clarify my thoughts even if I end up never reading it again.


Not sure if this is what antirez had in mind, but when I'm working on stuff, I whiteboard (if I'm at home--and who isn't these days) or write in a notebook.

First thing I figure out is how I want to interact with a thing. Whether that's a program or a class or a function. How do I want to call it? What parameters do I want to pass? What do I want it to return? How do I want to use what it returns? So, basically, write the interface first.

If I'm building something with multiple parts rather than a single class or function, I'll map out how these things all work together. A loose graph of interactions. Invoke A and have it return X; invoke B and pass X to it and it returns Y; etc.

Then I'll consider failure modes and think about what should happen if something doesn't work out quite right. Is it possible to route failures to a centralized cluster of error handlers so I don't have to implement error handling at every level?

Finally, I'll think about whether I can map behaviors to defined data structures instead of controlling flow with if/else patterns.

Once I have all that written down or mapped out, then I'll start implementing from the outside in. Stub the object, methods and return dummy data structures that fit until I have a complete system that's interacting the way I want. Then I go in and implement the actual functionality I need.

The last part--implementing functionality--often implies modifications to my initial thought process. But it's easier to understand what those changes affect if you've already mapped your design. So you might think B can produce Y with I parameter. Maybe it turns out you can't. So now you need to add a new param. Where is that going to come from? Well, you don't have to invent that out of nowhere because you've already mapped out what is happening. You know that you need to either add another node to the call graph or change the return value somewhere else.

By the end of the process, you have a working program, and you've also done a lot of your documentation work as well.

Again, I have no idea if this is what anyone else is talking about here. But this is how I personally work. It annoys the hell out of some people. But it works for me and helps me create sane software with interfaces people can remember over time.


Reading official documentation when working with new tools/frameworks.

Googling every hurdle as it comes and over-relying on StackOverflow is neither effective nor satisfying. Some of the projects out there have amazing documentation (e.g. VueJS, Kafka). It's best to read the overview & skim high level stuff to understand the core components/principles/terminologies. It makes it so much easier & enjoyable to use those tools.


Why isn't this higher?

Give everything a good read before really working. You don't need to remember everything, but you need to know what's there so you don't wind up reinventing wheels, endlessly googling with the wrong search terms, or doing things people who use the tool correctly find inscrutable. It's so important.

I will note that this is much easier to do well when you're a more experienced engineer than it is when you're just starting out, but getting used to doing it and going back over docs when you do have more experience is the best way to get used to it.


I think the main problem with this approach is the all-or-nothing problem. When reading official docs, it's not always clear when you already have everything that you need. Reading the whole doc is usually not an option, considering most modern tools easily have 100+ pages that go down the rabbit hole.

In short, good official documentation is scarce and time to read is even more so.


It is even better if you draw diagrams on dotted paper while you read.


For me, the transition from Bash to Zsh has been a huge efficiency boost. Mainly because of some great plugins for Zsh, such as z, zsh-peco-history (better history search), zsh-autosuggestions, and zsh-syntax-highlighting.

My blog post about setting up a Linux workstation describes this in detail: https://tkainrad.dev/posts/setting-up-linux-workstation/#swi....

The best thing is, there is no initial productivity hit. You don't miss out on any shell features that you are accustomed to.

Also, learning complex IDE features really pays off. At the very least, become familiar with the debugger.

Finally, I spent the last months making https://keycombiner.com/ in my spare time. It is an app for learning keyboard shortcuts and getting faster/more accurate at using them. It already made me more productive because I learned a lot of new shortcuts and found some problems in my typing accuracy.


For zsh, I highly recommend zsh-histdb, it stores all your commands in a sqlite database, along with data like timestamp, current working directory, hostname, session id, etc...

It has its own "histdb" command, but the best part is that it integrates well with zsh-autosuggestions, so with the right SQL query you can make it suggest something like "the latest command in the current session that matches, or failing that, the most frequent one in that directory".

I know it is controversial because it is not using a text file and UNIX loves text files, but it is really nice, and you still have your .zsh-history if you want to.
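To give a flavour of the kind of query described above, here's a sketch in Python's sqlite3 against a toy table; the schema is invented and does not match zsh-histdb's real one:

```python
import sqlite3

# Toy history schema (NOT zsh-histdb's actual tables):
# one row per executed command, with its directory and timestamp.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE history (cmd TEXT, dir TEXT, ts INTEGER)")
db.executemany("INSERT INTO history VALUES (?, ?, ?)", [
    ("git status", "/proj", 1),
    ("git push", "/proj", 2),
    ("make test", "/proj", 3),
    ("make test", "/proj", 4),
    ("ls", "/tmp", 5),
])

def suggest(prefix, cwd):
    # "Most frequent command in this directory matching the prefix,
    # breaking ties by recency."
    cur = db.execute(
        """SELECT cmd FROM history
           WHERE dir = ? AND cmd LIKE ? || '%'
           GROUP BY cmd
           ORDER BY COUNT(*) DESC, MAX(ts) DESC
           LIMIT 1""",
        (cwd, prefix),
    )
    row = cur.fetchone()
    return row[0] if row else None
```

Flat files can't express "frequency in this directory" without re-scanning everything; in SQL it's one query.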


histdb is one of my top favorite tools. And I've come to the conclusion flat files can only get you so far. sqlite is probably the next best thing to flat files, and I think history is one of those things where switching to a db is an immediate win.


I would highly recommend checking out fzf for better history search. Found the recommendation on another similar thread here and from coworkers. It's surprisingly fast and very intuitive.

https://github.com/junegunn/fzf


+1 to this. I've never been a fan of how <ctrl r> works in bash, and fzf makes it soooo much better.


Both this and histdb look quite interesting. That being said, I never had the feeling that I am missing anything with zsh-peco-history.


Wanted to thank you for writing your blog post. I have been on Linux full-time since about july 2019 and found your post some time after that. Your post really kickstarted my productivity in Linux.


Glad it helped you! Thank you for the nice words :)


Today I switched from zsh to fish and I'm already much happier. Was using zsh for 4+ years, too


What benefits do you see in fish when compared to zsh?


I would say fish is to zsh like what zsh is to bash.

More seriously, for a start: good defaults, highlighting and autosuggestion built in, parameter search with help and completion... (but it's not POSIX).


Regarding POSIX, I've been using Fish for about 4 years now.

POSIX always comes up as a deal breaker. I want to mention that I write all my scripts as POSIX as I can, or using Bash extensions. You still have Bash/zsh on your machine, so you can keep using your scripts and don't miss anything. Shebangs keep working:

    #!/bin/sh
    #!/bin/bash
    #!/bin/zsh
Personally, I actually don't change my default shell (chsh step). I simply set my terminal to use the fish command instead of invoking the default shell.

    - Gnome Terminal, there's a Title and Command tab. You can set a custom command there. Just put the path to fish
    - Terminal.app, Preferences > Profiles > Shell > Run command
    - iTerm.app, Preferences > Profiles > General > Command
    - Tmux, on your .tmux.conf `set -g default-shell /usr/local/bin/fish`
It's more portable for me that way.


I was also interested about it and from my research, Fish has two major differences:

- it enables the cool functionality out of the box, so unlike zsh you don't need to have large configuration file to enable everything

- it is not afraid to break bash compatibility to fix confusing scripting issues, so fish most likely will fail when executing a bash script, but writing scripts in fish should be more enjoyable


I also want to thank you for this blog post. I'm a long time linux user, but just got a new machine and decided to start from scratch rather than try to port over my previous environment. Looks like just starting from your post will save me some time.


Thank you!

I am currently setting up a new desktop myself and will soon update the post regarding Ubuntu 20.04. However, this will only be very minor changes, almost everything still works exactly as described :)


An actual debugger. I started as a PHP/WP dev and spent many hours running results through echo or var_dump. IMO the debugger is the absolute first thing you need to learn about the platform you're writing for. Without it, you're taking shots in the dark and you truly don't know how your code is executing.

It seriously pains me to see people not using one. I have a friend who is taking an online PHP backend class. There was one lecture on debugging, and all it consisted of was "here's what using var_dump looks like". I showed my friend how to actually set breakpoints in their JS code, set watches, etc and they felt cheated by their class. They should.


So much this. And command-line debuggers are usually awful. Just being able to set a breakpoint in the actual text file you're editing without using a different tool, and stop when a condition happens can speed up your workflow so much.


+1. Visual Studio with C#/.NET was very eye-opening to the power of live debugging: the ability to evaluate expressions and introspect variables.

EDIT: I originally said VSCode, but i meant the OG Visual Studio


I've not tried code for .NET yet. I generally like the normal-IDE-ness of Studio


I'm really confused by this. Don't debuggers come embedded into any IDE worth using?

How could someone start programming this century without access to one?


In the PHP world I'd argue most developers don't use an IDE, let alone a debugger. Setting a debugger up with Xdebug or ZendDebugger is also not easy for those less experienced with setting up the actual PHP environment.


Any profiler.

As a development tool: You can default to writing the majority of your code dumb, terse[1], and straightforward, even if you know there's a clever algorithm you might be able to use, because that way is easier to debug. Computers are fast and N is usually smaller than you think, and when you apply the profiler you'll find out that that the biggest performance problem isn't the thing you were going to optimize anyway.

As a product tool: People are more likely to buy responsive programs. The state of modern websites is so bad that non-programmers will actually comment. Every tester for Space Trains[2] commented on how smooth it is. That's a game, but I've seen the same comments on productivity software I've written.

[1] As in omitting layers, not as in omitting descriptive variable names.

[2] https://www.youtube.com/watch?v=LRJP0tie-30
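As a concrete starting point, Python's stdlib profiler can be wrapped around any function before deciding what (if anything) to optimize; `slow_sum` here is just a stand-in workload:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately dumb and straightforward, as suggested above.
    total = 0
    for i in range(n):
        total += i * i
    return total

def profile(func, *args):
    # Run func under cProfile and return (result, report text).
    pr = cProfile.Profile()
    result = pr.runcall(func, *args)
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()
```

The report usually points somewhere other than the spot you were planning to optimize.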


By layers, do you mean interfaces and abstract layers?


Yes.

Some abstractions are good, but I've seen many projects where an abstraction layer or interface is added that does the exact same thing as the code behind it. Or even more often, there will be one or two specific functions in the layer that does actual work, but it would've been fine to just write a helper function for that and not wrap everything else. It's actually pretty rare that whatever needs to be abstracted covers an entire conceptual area to the point where thinking of it as a "layer" makes sense.
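A minimal Python sketch of the distinction (names invented): a pass-through layer that adds nothing, versus a single helper for the one case that does real work:

```python
# Anti-pattern: a "layer" that mostly forwards calls unchanged.
class StorageLayer:
    def __init__(self, client):
        self._client = client

    def get(self, key):
        return self._client.get(key)          # pure pass-through

    def put(self, key, value):
        return self._client.put(key, value)   # pure pass-through

# Often all that's needed is a helper for the one genuinely different case:
def get_with_default(client, key, default=None):
    value = client.get(key)
    return default if value is None else value
```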


TabNine: https://www.tabnine.com/blog/deep (the code are GIFs which I had to click to play)

This thing is fucking magic.

It's ML autocomplete, with help from 'traditional' autocomplete methods that use static analysis. Instead of just completing one token at a time like traditional completers, it can do entire sentences or multiple lines of code in one go, and is freakishly accurate. And since it parses language it helps you write comments, and can understand relations between code. E.g. if you are writing 2d movement code and you do x += dx it'll automatically suggest y += dy for the next line based off of previous similarities; of course if you have x += [complex math formula] it'll fix it up for y and convert cos to sin, etc.

Support for many editors, and easy to install in vim. Free for personal use. Works for all languages, including plain English (and maybe other non-code languages?).


I tried this when it first came out, but it didn't seem much better than PyCharm's usual suggestions (which are admittedly excellent among its IDE peers). I rarely (maybe never?) saw it do any multi-line suggestions, let alone accurate ones. It was also very slow to suggest anything in the first place (I believe the network calls were slow, at least compared to local/native autocomplete).

Maybe it's progressed and I should try it again today.


Not sure when you started, but I've been using TabNine in PyCharm for a few months and it is absolutely mindblowing. I've had long line autocompletes (no multi-line), and often it suggests things I may not have thought of: "now that you mention it, I DO want that idiom". It's snappy enough for me, and I am not exactly a patient individual.


I love TabNine but had to stop using it because each instance can use 3GB of memory... Way too much for an autocomplete extension

https://github.com/codota/TabNine/issues/43


It's only too much if you need that memory for something else though. If it manages to be responsive with that memory and you only have 1 ide open, I don't think 3gb should be a problem on a modern system.


It always starts with this premise, and then suddenly every application, no matter how silly, demands 3GB. I understand that progress often happens by prioritizing other aspects over economy, but again, some people may value that progress less.


It uses _so_ much memory. I recently added an extra 16gb stick to my laptop, maybe I'll give it another try.


Disappointing to see the pricing model has changed; there used to be a license for unlimited project sizes, but now appears to be a $15/mo sub for their paid service.

To be fair, this was before they transitioned from on-device models to more complex, larger cloud models.

The free 400KB limit is quite generous, but you may need to spend time tuning the ignore list if you have junk in your project folder.


Similarly, Kite [0] for Python and JavaScript. I actually prefer Kite to TabNine, but ymmv.

[0] https://kite.com/


Kite got criticism for tracking and injecting ads.

https://qz.com/1043614/this-startup-learned-the-hard-way-tha...


strace.

Even after having learned many programming languages and contributed to various projects, it was only when I started using strace that I felt I could truly, efficiently understand what any program does, and could reliably write programs that do things fast.

I believe that "syscall oriented programming" (making your program emit exactly and only the right syscalls) results in clean, understandable software.

Now I use strace every day, as it is often the fastest way to figure out problems with any tool, written in any language, open-source or proprietary.

- Something hangs? strace shows how it's hanging.

- Computer is slow? strace will likely show who's the culprit spamming syscalls.

- "Unexpected error occurred"? Bypass programmers doing poor error handling, and witness the underlying "no such file or directory" directly.

Last week I even used strace to debug why my leisure-time computer game wouldn't load a mod, and as usual, strace did the job.

strace helps to really understand computers.
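One way to see the mapping from code to trace: Python's raw os wrappers correspond almost one-to-one with the syscall lines strace prints (the strace output in the comments below is approximate, and the paths are invented):

```python
import os
import tempfile

# Each os.* call maps closely to one line of strace output, e.g.:
#   openat(AT_FDCWD, "/tmp/.../demo.txt", O_WRONLY|O_CREAT, 0600) = 3
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo.txt")

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)  # openat(...) = 3
os.write(fd, b"hello")                                # write(3, "hello", 5) = 5
os.close(fd)                                          # close(3) = 0

fd = os.open(path, os.O_RDONLY)                       # openat(...) = 3
data = os.read(fd, 5)                                 # read(3, "hello", 5) = 5
os.close(fd)                                          # close(3) = 0
os.unlink(path)                                       # unlink(...) = 0
```

Running a script like this under `strace -f python script.py` shows these lines buried among the interpreter's own syscalls, which is itself instructive.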

If you want to learn more about strace, check out Brendan Gregg's pages like http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-..., my presentation for an intermediate-level example (https://news.ycombinator.com/item?id=16708392) or my project to build a programmable strace alternative (https://github.com/nh2/hatrace).


It's so disappointing that dtrace is neutered by System Integrity Protection on MacOS. When I want to do this I have to stop and transport my workload to a server or VM, which may or may not reproduce the issue.


strace is the first thing I skimmed down the thread for. You can learn a lot about how things work (or aren't working) by getting really familiar with an strace. Some of my coworkers give me grief for how easily I jump to "let me see an strace" but it works.


Do not use strace. Use sysdig which is superior in just about every way.


sysdig is useless without installing nontrivial performance-impacting instrumentation, cannot handle non-IP networking, does not fully report all syscalls, has a license with patent crap in it, has gated features behind a paywall, and cannot inject syscall faults. It's not even in the same class of tool as strace at this point.


> sysdig is useless without installing nontrivial performance-impacting instrumentation

Most reasonable people reading this sentence would come away with the conclusion that strace is fast, whereas sysdig has some inherent overheads. In reality it is strace that has performance and other problems which make it completely unsuitable for production use (strace will slow syscall-heavy code down by a factor of over 100; sysdig won't). Sysdig, on the other hand, can definitely be used in production, and I always found the performance overhead minor. Can you point to something showing otherwise? BTW, newer versions of sysdig do not require a kernel module, thanks to eBPF (but I have not used this).

> , cannot handle non-IP networking,

What is an example of a networking related query you can do with strace but not with sysdig?

> does not fully report all syscalls

Can you expand? Are you referring to the fact that sysdig will drop traces if the userland client cannot keep up (which is a feature and not a bug, and something that all production grade tracing tools do)?

> , has a license with patent crap in it,

As far as I'm aware sysdig's core is Apache licensed and the user scripts are MIT and GPL licensed. Apache has a patent grant, which seems better than not having one. What is your specific beef?

> has gated features behind a paywall,

What features that strace offers are behind a paywall in sysdig? What's wrong with a company that provides a tool that massively advanced the (linux, pre-eBPF) state of the art as open source for free to all also provide some paid offerings on top?

> and cannot inject syscall faults.

This is indeed a useful recent-ish feature I did not know about so thank you! But there are other ways to do it, and something that's orthogonal to the core tracing functionality.

> It's not even in the same class of tool as strace at this point.

Indeed -- the only reason to use strace at this point is because you already know it and it is likely available. This may change if strace switches away from ptrace, but for now it is a joke. If you want something that just does strace, but much better (minimal overhead, powerful and intuitive query language with CLI autocompletion) use sysdig. If you want to use the most general and powerful tool that can tell you lots of other stuff besides syscall usage (but has a much worse UX) look at eBPF and perf. If you want to be a serious performance engineer or similar you will have to learn it, but I suspect for most people sysdig has the best ROI. Perf and dtrace are both (far) more versatile but, IMO, (far) less pleasant to use.


If you consider processes as tools, there's one that I suggest to junior programmers bucking for responsibility/promotions.

Twenty-minute cleanup. Nobody is really going to notice whether you spent 5 hours or 5:20 on a task. As you're closing up and getting ready to push your changes, look for anything you can do to make it look or work nicer.

Eventually you start incorporating some of the lessons learned doing this into your implementations.


+1 to this. I've developed the habit of reviewing my PRs before publishing them. When you assume the role of a reviewer, you end up catching a lot of little (and sometimes big!) stuff, reducing the total turnaround time.


+1 to self code reviews, I almost always find stuff I've missed or could have done better.


I often comment on my own PRs to explain alternatives or tradeoffs I considered. These aren't necessarily worth capturing in permanent documentation or TODOs, but can share knowledge or build confidence that I've considered various angles that might come up in a review.

I'll also call out places where I'm not happy with the implementation, looking for feedback, etc.


For many of the tools that have improved my productivity, it was not the tool that was the breakthrough but the realization of the tool’s value. For example, version control has existed since the beginning of time practically, and I begrudgingly used RCS, SCCS, VSS, and probably other version control systems for ten or fifteen years until I had that Eureka! moment (coinciding with Git’s release, roughly) that inspired me to actually embrace version control tools. A similar experience happened with automated testing: I’d gotten the testing-is-good bondage and discipline spiel many times, but it wasn’t until I started writing extensive unit tests for language parsers that I realized how wonderfully empowering they can be.

That said, along with Git, I’d list Gdb (or LLdb or any real debugger), Emacs keyboard macros, Python’s venv facilities, and Django’s database migrations among the tools that changed my life.

Somewhat consistent with the it’s-not-the-thing-but-the-realization-of-the-thing’s-value theme above, I’d say reading the Practice of Programming back in ‘99 took my programming productivity to a new level, because it made me realize that one of the central tasks of an abstraction builder is creating a language that allows you to express thoughts in terms of that abstraction. Once you’ve done that, you “just” need to implement the language and all of the problems expressible in it become easy, even trivial.
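To illustrate why parser tests pay off so well, here's a toy recursive-descent evaluator in Python with the kind of tests that pin down each grammar rule (the grammar is invented for illustration):

```python
import re

# Toy grammar: expr := term ("+" term)*, term := num ("*" num)*.
def tokenize(src):
    return re.findall(r"\d+|[+*]", src)

def evaluate(src):
    tokens = tokenize(src)
    pos = 0

    def term():
        nonlocal pos
        value = int(tokens[pos]); pos += 1
        while pos < len(tokens) and tokens[pos] == "*":
            pos += 1
            value *= int(tokens[pos]); pos += 1
        return value

    value = term()
    while pos < len(tokens) and tokens[pos] == "+":
        pos += 1
        value += term()
    return value

# Each test pins down one grammar rule; any regression is caught immediately.
assert evaluate("2+3*4") == 14   # precedence
assert evaluate("1+2+3") == 6    # left-to-right addition
```

Once the grammar is covered like this, refactoring the parser internals becomes almost risk-free.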


Using multiple programming languages.

Using Golang has helped me create better data structures, and using C helped me understand linking, using python helped me understand closures, and using Ruby helped me understand that I hate programming.


can you elaborate on ruby? i'm curious to understand your experience with it and what brought you to the conclusion that you "hate programming".


I'm not the original person, so I have no idea what their experience is, but I feel pretty much the same. Every time I work in a language other than Ruby, it feels like programming. When I work in Ruby, the language feels almost effortless. I wish I could do everything in Ruby, and it frustrates me when there is something I can't make work in Ruby. Especially Rails: everything that works works so smoothly. When it doesn't work I get very frustrated and switch to another language, only to find out it's even harder there.

I don't like programming. I like it when the computer does what I tell it to do. I find that drastically easier to accomplish in Ruby (especially Rails).


What are your thoughts on Crystal?

See: https://crystal-lang.org/


Would you like to pay now (compile time) or later (runtime)?

We have ways of dealing with scaling of runtime in production (develop infrastructure environment).

There's no way I know to speed up the edit/compile/run loop during development.


It's ruby but with extra steps


Could you expand on the Ruby part? I'm not sure if you hate all programming or programming in anything other than Ruby.


you should take rust for a spin :)


The vi mode for Bash. Blew my mind when I discovered it and it probably saved me hundreds of hours already. I used to have multiple copies of this cheatsheet [0] at my desk for every new developer I would see editing a terminal command with the left and right arrows.

[0] https://catonmat.net/ftp/bash-vi-editing-mode-cheat-sheet.pd...


I dunno, I don't think it really gets you more than just adding some basic mappings to your ~/.inputrc:

    # mappings for Ctrl-left-arrow and Ctrl-right-arrow for word moving
    "\e[1;5C": forward-word
    "\e[1;5D": backward-word
    "\e[5C": forward-word
    "\e[5D": backward-word
    "\e\e[C": forward-word
    "\e\e[D": backward-word

    ## arrow up
    "\e[A":history-search-backward
    ## arrow down
    "\e[B":history-search-forward


All of those require me to shift my hands away from the home row of the keyboard though. The real magic of vi-mode is that everything you need is right there under your fingertips 100% of the time. Well, except escape, but that is why I map caps-lock to escape ...


Ah, interesting. That's something I've never even considered before. My hands seem to just move naturally back and forth without thinking. I know that people have brought up having to move back and forth between keyboard and mouse as being a pain point, but never thought about having to move out of the home row as one as well. For me, moving back and forth between keyboard, touchpad, mouse just seems second nature. I do wish I was better at dual-wielding keyboard and mouse, though, so I've been looking into mirrorboard.


Holy shit. Thank you.


magit (https://magit.vc/) - a git interface for Emacs. Hyper-interactive and ergonomic, feels like vim for git. Highly pleasurable to use and makes you significantly more efficient.

SLIME (https://common-lisp.net/project/slime/) - a Common Lisp development environment, also for Emacs. Comes with a REPL, interactive debugger (way better than gdb), the normal IDE-like experience you know and love, and this fantastic tool called the Inspector that is basically an interactive, modal editor for your program data. The Inspector is one of the most novel and useful tools that a development environment can have... and I've never seen another IDE that has anything resembling it. SLIME gives you a highly interactive and fluid development experience that I've never seen anything else come close to.

Spacemacs (https://www.spacemacs.org/) - a starter kit for Emacs that gives you a sane out-of-the-box configuration and an optional pre-configured Evil (vim keybinding emulation) setup. Much more flexible and powerful than Vim (elisp, while a bad general-purpose language, runs circles around vimscript) and much better ergonomics than vanilla Emacs (Emacs' mode-less interface is just straight-up worse for your hands than Vim's modal interface).


If you're after specific tools in a workflow...

- the :normal command in Vim and Evil.

- learning to use tags to navigate code.

But more generally learning how to take something seemingly complex like Linux or Git and then delve in and read the code and understand how it actually works. Learning to read good technical books and manuals and understand how something was designed and how it was designed to be used.

Colleagues think I work magic; in reality I'm just as thick as they are, I just RTFM.


hahaha...that's my 'secret'

recently I've been paralyzed by the amount of development in ML/DL/RL. Often I have to step back and remind myself to focus on the fundamentals.


I'm going to re-interpret the question more broadly than just "tools" (unless you consider a technique to be a kind of tool):

* Taking good notes

* Writing good plans, good documentation

* Sharing updates and coordinating with the right people at the right time

* Understanding an unfamiliar codebase

* Test frameworks

* Different design techniques (pure functions, dataflow programming, etc)

* The terminal and related features (like emacs bindings, or middle-click to paste last selection)

* Firefox/Chrome devtools

* Emacs keyboard macros


I can no longer count on my fingers how many massive coding efforts from our backlog evaporated into nothing because we sat around and thoroughly talked through the actual business cases.


Spacemacs!

But also, writing the documentation as one works through the problem, either in org-mode, or in a wiki.

The older I get, the less I remember, so the documentation is key.


I'd love to get better at quickly understanding an unfamiliar codebase. Do you have any resources I could dig into?


Most of my productivity gains these days have come from aligning myself and my workstation.

- i3wm (https://i3wm.org/) - particularly getting comfortable editing the .i3/config. It's the most significant productivity change I've had since switching to Linux from Windows.

To get into it, I highly recommend this 3-part video series from Code Cast: https://youtu.be/j1I63wGcvU4

- yadm (https://yadm.io/) - a dotfile manager. It's essentially a git wrapper, but it's allowed me tons of freedom tweaking my setup without worrying about persisting it.

It supports encryption and switching file versions based on which system you're on.


fast switching between workspaces + efficient use of screen real estate + great hotkeys make i3 truly great, I have used it for many years and don't want to miss its minimalism

