Ask HN: Which tools have made you a much better programmer?
385 points by karamazov on June 9, 2020 | 514 comments
Getting better at coding is usually a long, slow process of study and practice. However, sometimes I run into something that's easy to understand and, once I'm using it, feels like I've leveled up.

A few personal examples are:

* version control - specifically, reading up on git and understanding the more complex commands

* debuggers

* flame graphs for performance debugging

* good code search

What have you added to your workflow that's made you much more productive?




- GNU/Linux + i3wm; complete control over my programming environment.

- bash + GNU coreutils; seriously, take the time to be able to write helpful bash scripts which can run basically anywhere.

- git; use it even when you're not pushing to a remote. Add helpful aliases for everyday commands. Build a good mental model of commits and branches to help you through tough merges. ( my ~/.gitconfig: https://gist.github.com/jeaye/950300ff8120950814879a46b796b3... )

- Regex; combined with bash and your GNU tools, a lot can be done.

- Vim; modal editing and vi-like navigation can blow open your mind. Explore the existing plugins to accomplish everything you want to be able to do. It's all there. ( my ~/.vimrc: https://github.com/jeaye/vimrc )

- Functional programming; if you're new to this, start with Clojure, not one of the less-practical languages. This has made such a huge impact on the way I think about code, write code, and design code that it's possibly the biggest one here for me.


An excellent list. Regarding functional programming, I recommend starting with a gentle approach that doesn't require picking up a new language:

1. Stop creating counters/loops and become facile with map, reduce, and the like. This will shift your thinking away from blocks and toward functions.

2. Take the time to really understand what side effects are, and start avoiding them everywhere they are not necessary. Keep scopes as local as is practical.

3. When you start toying with functional programming per se, make sure you really have your head around recursion. That's where much of the magic concision comes from.
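
A minimal Python sketch of the three steps above (the order amounts and tax rate are invented for illustration):

    from functools import reduce

    orders = [12.50, 40.00, 7.25]          # hypothetical order totals

    # 1. map/reduce instead of counters and loops
    with_tax = list(map(lambda amount: amount * 1.08, orders))
    total = reduce(lambda acc, amount: acc + amount, with_tax, 0.0)

    # 2. avoid side effects: return a new value instead of mutating shared state,
    #    so everything stays local to the function
    def discounted(amounts, rate):
        return [amount * (1 - rate) for amount in amounts]

    # 3. recursion in place of an explicit loop (fine for small inputs;
    #    note Python itself has no tail-call optimisation)
    def total_recursive(amounts):
        return 0.0 if not amounts else amounts[0] + total_recursive(amounts[1:])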


Agreed on the gentle approach. I've started using libraries like Ramda for JS, which is specifically designed to ease people into functional patterns.

I then moved to FP-TS, which makes more heavy use of haskell-like patterns and monads.

The hard part isn't the syntax of whatever language, but understanding the new patterns and way of thinking, which you can do with a simulation layer like FP-TS (for typescript/javascript).

The functional patterns and emphasis on types make your code more robust and correct. It emphasizes correctness, which is ultimately what your job is as a programmer; optimization comes after.


I too have started down the FP route with Ramda and ramda-adjunct.

Though other people at my work dislike it because they don't understand what's going on.


There could be reasons for disliking it other than not understanding what's going on. Maybe other people are working on tight deadlines and don't have the time or mental energy to spend trying to understand an entirely unfamiliar programming paradigm.

I would want to program functionally too, but I would seek out projects or teams that already use functional programming. I'd never introduce a functional language to an already established team or project, unless of course I was the CTO and there were clear benefits.


If you practice domain driven design, you can always start out isolated projects or even just new modules if your code is sufficiently modularized. I started out by making a new library/module with Ramda, and then it's consumed via a function call that anyone programming in any style can use.

Agreed that functional patterns can be hard to understand if you don't know them, but once you do know them, the code is much easier to understand and reason about. It's a long-term investment, and one that frankly I believe will be inevitable as more and more people start doing programming work.


Had the same result, and had to settle for the middle ground of (lodash | underscore; I don't recall which now.)

There's a lot of resistance to adopting anything with the name functional, and that resistance is often seen in a /refusal/ to try to understand instead of a mere lack of understanding: people put up walls straightaway. I expect many need motivating examples to guide them to it.

Languages like Kotlin are pointing the way towards that middle ground far more effectively than, say, Scala did.


> I recommend starting with a gentle approach that doesn't require picking up a new language

Disagree. This is likely to 'dilute' the lessons of functional programming, as it were. If you learn to program in idiomatic Clojure/OCaml/Haskell/Scheme, you can be relatively sure you really have picked up the principles of functional programming.

If you merely attempt to apply new principles in an environment you're already comfortable in, things aren't going to 'click' in the same way.

Beside that, plenty of languages simply lack the facilities needed to effectively use the concepts of FP, as vmchale says.


I can confirm this. Although I knew the principles of FP, concepts didn't click until I started using a functional language.

In non-FP languages, I didn't originally appreciate the benefits of the pattern. It was more work to do things functionally, so I dismissed some patterns that were actually useful.

I'm a bit biased, but I would recommend Elixir as an accessible FP language: it has an approachable syntax and modern tooling.

You may be frustrated with the constraints of immutability for a few weeks, but the benefits become apparent once you're used to it.

Now when I work in non-FP languages like JavaScript, I will apply FP principles when it makes sense.


Some people say learning Latin makes you a better writer, smarter, etc. even if you're unlikely to directly use it. Dubious claims but it feels like FP can be like that.


The thing about learning Latin (not that I am great at it), at least for an English native speaker, and the epiphany English speakers have, is the realization that a language can have structure and can be discoverable: if I know a root word then I can almost be sure that I know the meaning of con, re, dom, etc. There is just no equivalent to that in English other than the Latin-origin words we borrowed, because we just assimilate any words we like and make up new ones as we see fit. An example would be beaucoup, a Vietnamese word, yet most US English speakers would know it means big. Without the historical reference of a movie you would have no rhyme or reason for why an Asian-origin word made it into English. There is literally no rhyme or reason to most of the spoken language, and I think that is the epiphany: some people actually thought out a logical way to create a language, and through that logic it is discoverable.


Isn't beaucoup French?



Yes it is - France colonised part of Vietnam.


yes, and as Army slang, it made it into American English through Vietnamese


Very cool. Did not know this. Will have to tell my Vietnamese wife. We were trying to figure out if English had any Vietnamese words. I think this counts.


I'm pretty sure it was already slang in Louisiana, via the French spoken there.


Hundreds of years of French culture through Louisiana.

Other unique French US cultures:

Haitian Creole

New England French, which came with Canadian migrants.

Missouri French

Muskrat French from Michigan

North Dakota Metis French

It could have come to some people 40 years ago, but it was already here for many.

Still it's a French word. It is like going to Germany and learning an English word but calling it German.


Sure. When learning Latin, well, you learn to read and write Latin. You absorb the language's principles by learning the language, not by trying to highlight them in a language you already know. That can only give a much shallower appreciation.


Not quite true - many complexities of grammar are shared between the languages and it is often useful to structure the learning of Latin around the English patterns of grammar.


I imagine that diligently learning a foreign language (dead or otherwise) will make you smarter, especially if the free time spent on it would otherwise be spent on less academic pursuits.

(edit: spelling)


For me, I get the most benefit from using FP features. I can write one-liners that are easy to read and that replace tens to hundreds of lines of my code.

For a simple example:

    val newList = list.map(functionCall(_))

instead of:

    val1 = functionCall(1)
    ...
    valN = functionCall(N)


A side note: "Dubious" comes from Latin, sharing the root of "duo", which means two, in this case referring to the possible indecision between two things or ideas.


Even in languages where the concepts can be encoded, it can be hard to determine what aspects of a given library are the encoding and which parts are the fundamental ideas if you haven't seen the ideas used in well-suited language. For instance, I didn't really understand the use of functools.reduce[0] or itertools.starmap[1] in Python until I was familiar with zipWith[2] and foldl [3] in Haskell.

The ideas themselves are not particularly complicated, but I hadn't previously worked with abstractions where the default was to operate on whole data structures rather than on individual elements, so I didn't see how you would set up your program to make those functions useful. In addition, for abstract higher-order functions, type signatures help a lot for understanding how the function operates. I found `functools.reduce(function, iterable, initializer)` significantly more opaque than `foldl :: (b -> a -> b) -> b -> [a] -> b` because the type signature makes it clear what sort of functions are suitable for use as the first argument.

It's now easy for me to use the same abstractions in any language that provides them because I only have to learn the particular encoding of this very general idea. While I couldn't figure out why functools.reduce was useful or desirable, I couldn't figure out many parts of C++'s standard template library at all. But if you already know the core concepts and the general way that C++ uses iterators, then the fact that functools.reduce, Data.Foldable.foldl, and std::accumulate[4] are all basically doing the same thing for the same reasons is a lot more readily apparent.

[0] https://docs.python.org/3/library/functools.html#functools.r...

[1] https://docs.python.org/3/library/itertools.html#itertools.s...

[2] https://hackage.haskell.org/package/base-4.14.0.0/docs/Data-...

[3] https://hackage.haskell.org/package/base-4.14.0.0/docs/Data-...

[4] https://en.cppreference.com/w/cpp/algorithm/accumulate
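
To make the foldl/reduce comparison concrete, here is a small Python sketch of my own (not taken from the linked docs) where the accumulator type differs from the element type, which is exactly the distinction the `(b -> a -> b) -> b -> [a] -> b` signature makes explicit:

    from functools import reduce

    words = ["foldl", "reduce", "accumulate"]

    # acc is a dict (the "b"), w is a str (the "a"); the initializer {} fixes
    # the accumulator type, just like the second argument of foldl
    lengths = reduce(lambda acc, w: {**acc, w: len(w)}, words, {})
    # {'foldl': 5, 'reduce': 6, 'accumulate': 10}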


> it can be hard to determine what aspects of a given library are the encoding and which parts are the fundamental ideas if you haven't seen the ideas used in well-suited language

That's a good point. Using a proper functional programming language doesn't just enable FP ideas (you can't fake a feature like implicit capture of variables), it may also clarify them by reducing baggage.

> I found `functools.reduce(function, iterable, initializer)` significantly more opaque than `foldl :: (b -> a -> b) -> b -> [a] -> b` because the type signature makes it clear what sort of functions are suitable for use as the first argument.

I suspect you're just a better Haskell programmer than me (I've only ever dabbled), but I find the big-mess-of-arrows syntax to be pretty confusing compared to a simple tuple of descriptively named identifiers.

Perhaps related to this: I don't see the practical appeal of currying. Even C++ supports the 'bind' pattern just fine - http://www.cplusplus.com/reference/functional/bind/#example


A gentle way to get into the "functional" mindset is to write a small script and then use it to process some collection with xargs.

xargs is analogous to map() in this situation, and the script needs to have limited side effects to work well with concurrency. xargs -P4 for example.


Functional programming has been a game changer for me as well and has enabled me to write larger and more complex programs that are easy to maintain and reason about. I highly recommend cytoolz for Python.
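
For anyone curious, a tiny sketch of the style toolz/cytoolz encourages, assuming I'm remembering the curried namespace correctly (cytoolz.curried mirrors toolz.curried):

    from toolz.curried import pipe, filter, map

    # sum of squares of the even numbers, written as a left-to-right pipeline
    total = pipe(
        range(10),
        filter(lambda x: x % 2 == 0),   # curried: returns a function awaiting the iterable
        map(lambda x: x * x),
        sum,
    )
    # total == 120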


> An excellent list. Regarding functional programming, I recommend starting with a gentle approach that doesn't require picking up a new language:

But at least make sure that your language supports closures.


This! What is funny is that I started doing this before I knew anything at all about functional programming; I just started to avoid stuff that I had painful experiences with.

Later I read a couple of chapters of SICP and then I really changed, and my programming hasn't been the same since. The language I use at work is JavaScript, and while SICP isn't for JavaScript, nothing else has changed my JavaScript for the better to that degree.


> while SICP isn't for JavaScript

There's a port. https://sicp.comp.nus.edu.sg/


> 1. Stop creating counters/loops and become facile with map, reduce, and the like. This will shift your thinking away from blocks and toward functions.

I am not very comfortable with this. How can I learn to do this in traditionally non-FP languages like Java? (Am CS undergrad student)


Caveat: I haven't touched Java in years, and that was not even a current version of Java at the time (well, it was old code made to run on the then-current JVM, but not utilizing any features introduced after 2006 or so). I'm assuming these are good resources, but I'm not sure.

https://developer.ibm.com/technologies/java/series/java-8-id...

List of articles relating to idiomatic Java 8 code. Some of these touch on using lambdas and functional idioms.

https://developer.ibm.com/articles/j-java8idioms3/

This one shows a few of the functional-styled methods that can be used (forEach, takeWhile, iterate, etc.).

https://developer.ibm.com/articles/j-java8idioms2/

Shows the collection pipeline pattern.

I have experience with the same things in C# and other languages; the way they're used in these articles is what I'd expect from a comparable API.


I can't speak authoritatively about Java, but it looks like map-reduce is available in Java 8 by converting a collection to a stream [0]. Considering the definitions of map and reduce can help one see how they can replace loops/counters:

MAP: Take a collection, say a list/array or a dictionary/hash, and perform some function on each member of the collection, returning a new collection whose members are the return values for each original member. It's a loop, but no loop!

REDUCE: Walk the collection like map does, but carry along an accumulator, and have your function combine each member with the accumulator's current value to produce the next one. Summing is a basic example.

I'm not specifically recommending preferring this in Java as a step towards functional programming. It's in, uh, more terse languages like Python and Ruby where the payoff is obvious [1][2]. And among not-functional programming languages, it's not just dynamic languages, either. Consider Dart (and seriously, consider Dart) [3]. Also, Javascript, which has had many features shoehorned-in over the years, has these and related functions.

[0] https://www.java67.com/2016/09/map-reduce-example-java8.html

[1] Double some numbers Python: result = map(lambda x: x + x, array_of_numbers)

[2] In Ruby: result = array_of_numbers.map{|x| x + x}

[3] In Dart: result = arrayOfNumbers.map((x) => x + x).toList();
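
Since those footnotes only show map, here is a matching reduce one-liner in Python (array_of_numbers is the same hypothetical input):

    from functools import reduce
    total = reduce(lambda acc, x: acc + x, array_of_numbers, 0)  # equivalent to sum(array_of_numbers)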


One other other thing. Functional thinking has greatly changed the landscape of client-server applications that are hosted in the cloud as well. If your aim is apps, maybe don't bother to master the skills needed to set up and maintain a Linux server (although if you follow OP's other suggestions, you're well on your way). Instead, consider your backend as a network of microservices, functions, that each do one thing and do it with side effects only when necessary. The host for your app? Poof! That's AWS/GCP/Azure's problem.


One other thing. You will be thinking functions first if you get into data science, say with Python/Pandas. In general, Pandas functions are vectorized, meaning they operate on whole collections at once in optimized native code rather than element by element. You really don't want to write a loop that iterates over some 5,000,000-member collection and applies some expensive function serially.
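
A rough sketch of the difference, using an invented DataFrame with a single 'price' column:

    import pandas as pd

    df = pd.DataFrame({"price": range(5_000_000)})

    # slow: an explicit Python-level loop, one element at a time
    doubled = [price * 2 for price in df["price"]]

    # vectorized: one expression over the whole column, executed in optimized native code
    df["doubled"] = df["price"] * 2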


Thanks.

My school (Macalester College) recently started introducing some fp constructs/concepts in our intro class such as map, reduce like you mentioned.

It was long after I took it and now I am TA-ing. Oddly enough, I am more comfortable approaching this style in Kotlin.


> I recommend starting with a gentle approach that doesn't require picking up a new language:

But then you don't get any of the newer stuff.


I upvoted the parent and want to emphasize vim key bindings. This is not necessarily vim the editor, it's your editor of choice in vim mode. Learning to use vim is like learning to touch type: it's initially a pain, but it's hard to ever go back once you've mastered the basics.

If you haven't learned to touch type (it happens, I didn't learn until I was 22), then first learn that, then learn vim.

FYI: Remap your capslock key to escape to use Vim more effectively.


I recently had a revelation: when typing longer shell commands it can be time-consuming to go back and make changes; it turns out you can use Vim-style cursor movements within fish:

https://stackoverflow.com/questions/28444740/how-to-use-vi-m...


This was a revelation for me as well. I also didn't realize that if you haven't configured vi keybindings, the default is Emacs (in bash or anything using readline). Even though Vim's my main editor, I found modal editing a bit too heavyweight on the command line, so I prefer the default Emacs bindings (most useful by far: M-b to go back one word and C-k to delete everything right of the cursor).


I prefer the emacs bindings for the command line such as C-A, C-U (mostly due to muscle memory), but have set up my $EDITOR as vim. This allows me to do C-X C-E, which opens the current command in vim to be edited.

If you are using zsh, you need to add this to your .zshrc:

  autoload -z edit-command-line
  zle -N edit-command-line
  bindkey "^X^E" edit-command-line


You can do that in bash too, and in any CLI that uses the readline library.


  set -o vi
Starts you in insert mode. To go back to the usual command-line editing,

  set -o emacs


also zsh (tho it doesn't use readline per se)


> FYI: Remap your capslock key to escape to use Vim more effectively.

That's more one way of doing things than an "FYI". E.g. I swapped capslock with ctrl. There are many ways to exit insert mode; I prefer:

  inoremap jk <esc>
  inoremap kj <esc>


I use that mapping all of the time. It is also helpful to use

  inoremap <esc> <nop>

as this helps you train your fingers to stop using <esc>.


You don't need to know how to touchtype. Programmers are not a glorified typing pool. Furthermore, my father† has worked every job there is in a world class news organization since entering adulthood and never learned to touchtype. He could type 140 WPM.


Learning to touch type took roughly 10 hours over 2 weeks, and it made my life immeasurably better. Your (dad's?) mileage may vary.


What exactly improved?


Imagine having to look at your mouse every time you clicked on something. Now imagine you no longer have to do that.

If you're really curious, just learn to touch type and find out. It doesn't take all that long if you're already a solid typist. I'd be fascinated to read an article from someone who learned how to touch type and thought it was a waste of time.


OK, so terminology: touch typing to me means that you learn to finger the keyboard in a specific way, not just that you can type without looking at the keyboard. I can do that. I just never bothered with the "correct fingering".


For me it improved my physical comfort significantly. In particular it solved my back pain because I didn't spend so much time with my head looking down.


One other method to quickly exit insert mode if you can't remap keys is ^C. I did not know this even though I used gvim on Windows computers not under my control for years.


True and handy. However, Esc and ^C don't have exactly the same behavior -- from the documentation:

<Esc> or CTRL-[ End insert or Replace mode, go back to Normal mode. Finish abbreviation.

CTRL-C Quit insert mode, go back to Normal mode. Do not check for abbreviations. Does not trigger the InsertLeave autocommand event.


Re: i3wm

I installed it a couple years ago, went whole hog, down the rabbit hole, but realized a couple of things.

1. I rarely ever use anything more than a simple L/R split.

2. When I do use something more complex, it's almost always in the terminal, in which case why not use tmux?

These days I'm back to using gnome because ubuntu switched to it over unity (which had a weird multitouch bug that drove me crazy).

What do you get out of i3 beyond a simple L/R split supported by simpler wms and how often do you use it?


While I do use i3 for more than L/R splits with stacking etc. the largest benefit I get from it is workspace management.

I use i3 with polybar and have dedicated workspace icons (web browser, terminal, editor, to-do, email, music, etc) for quick navigation between different applications. Over time I’ve built up muscle memory (i.e $Mod+3 will bring me to my editor) that has significantly sped up my development process. While you could use another window manager for a similar purpose, I find the relatively minimalist approach of i3 + polybar in my case to be fast and highly configurable.


I can echo this! This is exactly how I feel too - the muscle memory around workspace management and the scratch windows (floating windows that you can toggle in/out of visibility based on a single keystroke) are the real boosters for me rather than splits. Splits are useful but the most common use I've seen myself do is to have a browser and a terminal / editor in splits.


> The scratch windows (floating windows that you can toggle in/out of visibility based on a single keystroke)

Is the toggle an i3 concept? I'm interested in it, can you give me the function name so I can look up doco? :)


Not GP but I think it's `scratchpad toggle` - scratchpad is the keyword to lookup anyway.


Thanks, interesting concept that has some valid applications :)


I've got a portrait monitor connected to a laptop, so I end up splitting the monitor's pane vertically.

I treat each workspace as dedicated for a specific purpose - Dev, Browsing, Chat, etc. That gives me quick mnemonics to hop to each space: MOD+1, MOD+2, MOD+3, ...

Within my Dev workspace, I use a tabbed pane for top-level organization: browsers (stacked), IDE, terminal, Emacs (magit + org), etc. This keeps my focus on that space when doing dev, and away from the laptop monitor, which is only occasionally useful as a reference.

I'll occasionally stack a terminal beneath my IDE if the current task requires it, e.g. to test a deployment or a project task.


Off the top of my head, things I use most beyond L/R split include:

1. Workspaces. Not at all unique to i3 but I’ve kept mine themed and automatically load certain apps into the same workspaces—all things i3 makes easy to do.

2. Floating scratchpad for media player. Nice to have my music controls always accessible but only visible when I unhide them.

3. Vertical split beneath my editor with a terminal. Just my personal preference, but I typically have L/R with code/browser and then split the code half vertically.

Being able to move windows across displays and workspaces quickly are other pluses, but again, not at all i3 exclusive.


I moved to awesomewm a few years back and I absolutely love it. I can open half a dozen terminals at once and they'll all automatically be ordered in a way that's immediately useful to me, where all the terminals are visible and (largely, depending on the automatic layout set) equally sized. And if I want another layout, I just press one combination to cycle through all the layouts I've configured. tmux doesn't give me that kind of flexibility. It doesn't feel anywhere as fluid or seamless to switch between half a dozen (or more) terminals.

It means I can have an overview of a bunch of different things and keep terminals context-specific (1 terminal for htop, 1 for docker, 1 for whatever remote test environment, 1 for project A, 1 for project B, 1 for some other remote host I need for some reason, etc.) If I want to do a new task unrelated to anything I'm doing before, I don't need to break the context of an existing terminal, I just press Alt+Enter, it's automatically slotted into a place where it's completely visible and usable and I can do that task quickly. When I'm done, I can close it, again, without disturbing the context of all the other terminals. It's just incredibly freeing to have that and I feel it frees a lot of cognitive load by being able to go back to a terminal for a certain task and immediately see exactly where I was and what I did last.

Also, much like the other comments, I use task-specific virtual desktops all the time. First desktop is for all the terminals. Second is for browser/communication. Third is for project A. Fourth is documentation related to projA. Fifth is projB. Sixth can be more documentation. I often have 10 virtual desktops for different things. I don't want to imagine what it'd look like if I had it all on one desktop.


Sorry to hijack your comment, but do you know of resources I can read to write my own custom layouts using awesomewm? I have a 21:9 monitor, and would like to write a layout where I can have a game running in 1080p + some windows on the side. Currently I do this just fine with floating windows, but surely there must be a way to use the tiling system.


Hah, it's probably possible, but awesomewm's documentation for this kind of thing isn't great. I've looked into it but was put off by the complexity since I don't have that much time to spend on that kind of thing. I did find a couple of links that might be helpful as a jumping off point?

https://stackoverflow.com/questions/5120399/setting-windows-...

https://stackoverflow.com/questions/45411844/before-diving-i...


I depart from that L/R split pretty regularly, especially when coding. I frequently have a single large window taking up half of my screen, and several smaller windows stacked vertically on the other half. The big window is usually the editor, but sometimes the browser if doing web development, or possibly a pdf or something like that. The little windows might be the editor, or a browser, or general command line work, or the stdout of a server or other daemon that I'm working with.

I sometimes do move to more of a tmux split workflow, especially if I'm working on remote machines, but it's just much nicer to have the same keyboard commands for all of my windows.


I always have at least 4 windows open in a project workspace, those being a text editor, a file browser, a terminal, and a web browser.

Currently I use i3 with packages pulled in from XFCE to handle sessions and power management, plus xfce4-appfinder and xfce4-panel (started and killed with MOD keys of course) because I wanted something beyond d-menu / b-menu.

It all works very well and was easy to configure.


Working on high-resolution monitors allows me to do LLRR splits. Also, the couple of seconds I save from not moving my mouse is, I think, a worthwhile tradeoff. i3wm's floating mode doesn't really make me _lose_ anything, either. All upsides.


My typical setup is that I have a single desktop for each project I'm working in. Web projects split into 4 panes. Top left, browsers (different chrome profiles running in different i3 tabs), bottom left dev console (same), top right vscode, bottom right, i3 tabs for all the consoles I have open (`npm run`s, git, etc). For go projects I have full left split IDE, ssh in to vagrant top right, bottom right are local consoles (tabbed) for building/running tests/etc.

This would not be possible if I wasn't using 4k monitors. That was a big shift for me, because now I think of each 4k monitor as 4 1080p displays.


I agree wholeheartedly. Especially when most of the work I do is on a remote server anyway, so I'm already in tmux. I still use i3wm on older laptops just for the battery life gains, but 95% of what I do is Firefox + Terminal emulator, and alt-tab is just as fast. My main workstation is just gnome and it's fine.

(Sidenote: is there any sort of linux libvte-based terminal emulator that has tmux integration a-la iTerm2? For when I do use i3, it would be really nice if I could spawn a new terminal on a remote server, attaching through an existing tmux session.)


Re. Sidenote: It doesn't look like there are many terminal emulators with tmux built in¹, but you can bind a keyboard shortcut to `xterm -e tmux attach'. (rxvt[-unicode] and st also support the -e flag, to run a given command instead of the default shell).

1: https://unix.stackexchange.com/questions/189805/what-termina...


That would work if I was running tmux locally, but what about when I’m running tmux on a remote server?


Maybe use `xterm -e remote-tmux' where `remote-tmux' is a small script in your $PATH similar to https://stackoverflow.com/questions/27613209/how-to-automati... ?


For a very long time I didn't use anything besides just plain vim. The two biggest things to add to your vim use are undodir and YouCompleteMe. Crazy that I didn't have either of these for so long; I wish undodir were part of the defaults.


I made persistent_undo as part of Google Summer of Code in 2011. Very grateful I got to do that (and for the mentorship of Bram Moolenaar), and I'm so glad that this became part of your essential workflow. I also can't live without it at this point.


Thank you so much! It is very much appreciated and I don’t think I have any cases where I wish it was better. It works exactly like I expect and does what it says on the box.


Thank you for making it, I can't imagine my life without it.


I switched from vim (with YouCompleteMe) to VSCode about 4 years ago and I've recently discovered the intellisense engine for VSCode is now available as a vim plugin: https://github.com/neoclide/coc.nvim.


This makes me realize I really need to update my .vimrc and some plugins.

I've been toting around the same .vimrc for like 6 or 7 years and there are so many better plugins now.

Vim has been probably the most profitable tool I've ever picked up. Or maybe git. But I think Vim.


I've been using vim for 20+ years and have never been into extensive customizations. That was part of the attraction, because I had to deal with a lot of remote servers, often off-shore. With out-of-the-box vi/vim I could get the most done with the least number of keystrokes. Someone made a comment above about IDE editors with vim emulation. Wish every IDE would do that. RStudio for example is not exactly vim, but I find it close enough. If only Spyder and Jupyter would do that.


I've used these plugins to get vim bindings in Jupyter Notebook / Lab.

Jupyter Notebook: https://github.com/lambdalisue/jupyter-vim-binding Jupyter Lab: https://github.com/jwkvam/jupyterlab-vim


Thank you sooo much!!!


To add to this, vim quickfix-reflector ( https://github.com/stefandtw/quickfix-reflector.vim) was life-changing for me.

It lets you modify and save code from the quickfix buffer, so when you search for something and it shows up in the qf, you can do a find replace / edit / etc. This is especially great for mass refactoring / renaming.


Woah, that's awesome. I use ^f for ack.vim, so combining that with quickfix-reflector sounds superb. Thank you!


By "undodir" are you referring to the "vim-undodir-tree" plugin? Because the "persistent_undo" feature that is built into Vim/Neovim (normally) is what I think of when I hear "undodir".


New(ish) vim user here. I couldn't figure out how to install YouCompleteMe the other day, but I had no trouble with coc.


> Vim; modal editing and vi-like navigation can blow open your mind. Explore the existing plugins to accomplish everything you want to be able to do. It's all there.

The reason I end up ditching Vim after a few weeks every time I try it (4 serious attempts now) and go back to IntelliJ (which I’ve used for two decades) is that I never found a solution to the following trivial issue:

Imagine you have a large Java codebase and you want to refactor all occurrences of a method called “doFoo()” to give it a better name - how do you do this in Vim?

This is a single keypress in IntelliJ and I use this function very frequently but I never found a way to do it in Vim.

Note: I only want to change THIS doFoo() method, not the hundred other doFoo() methods in the codebase.

Also note: yea, this includes all implementations of the interface, all abstract classes, all classes that extend a class that implements the interface, and all other funky polymorphic things, and NO unrelated code. And do it all in one keypress; don't have me manually go through line by line.

Any ideas if this is possible now?


If you keep up with the LSP space, this is now possible with Vim

https://github.com/eclipse/eclipse.jdt.ls

https://github.com/georgewfraser/java-language-server

Are both good examples. You'll need a corresponding client like this one

https://github.com/prabirshrestha/vim-lsp

There are others, but this one is pretty good. Next release of Neovim will have one built into the editor. Frankly, it's a bit of a hassle but once you get an LSP provider set up you can get one for just about any language you're using


I am about as hardcore a vim user as they get and I would never edit Java without an IDE.

That being said the first thing I do after installing IntelliJ is open the plugin settings and install IdeaVim.


I'm the same, s/vim/emacs/, though I'm also very comfortable in Vim.

I use Emacs for a tonne of stuff, and basically any other language I use. But for Java, with its deeply hierarchical codebases and general verbosity, there's just no sane way to manage that complexity smoothly without a good IDE.

(One could argue that if a language necessitates an IDE to work with it, then that's a failure of the language's DX. But that's an entirely separate discussion.)


You don't need to choose between IntelliJ and Vim, because you can use Vim keybindings within IntelliJ.

So you get all the powerful IDE commands and the high speed of Vim commands at the same time.

It's easy to set up!


My life was also improved with some additional aliases in my git config:

    [alias]
      dfif = diff
      idff = diff
      grpe = grep


I'm always puzzled how often the wheel gets re-invented ... https://git-scm.com/docs/git-config#Documentation/git-config...


This is only for git commands though? `diff` and `grep` are their own thing.


Not when they're git subcommands, or misspelt git aliases as above.


Years ago Interlisp (does anybody here remember Interlisp?) had a function called DWIM (AKA Do What I Mean). It was an (optionally enabled) part of the REPL. If you typed something that made no sense, DWIM would try to figure out what you really meant to type, and offered to run the corrected command.

I have often wondered why that functionality disappeared, and why no one has tried to resurrect it. Search engines offer corrections all the time; why doesn't bash?


Quaxity quuxity,

Teitelman's Interlisp

Has a DWIM feature that's

Really a screw;

MACLISP has evident

Superiority

Letting its customer

Mean what he do.

--The Great Quux (Guy L. Steele)

This poem indicates the frustrations that hackers had with DWIM at the time, which may explain why no one tried to resurrect it. Too-clever-by-half features intended to help tend to drive people nuts, especially when they fail. Even when they succeed, they interrupt the user's flow and become like that dialog box Windows users just dismiss.



Zsh has this. But it mostly drives me mad.


bashrc snippet (with extra newlines, for folks on mobile apps like materialistic that don't understand code formatting):

    # I do this an embarrassing amount

    alias fgf='fg'

    alias fgfg='fg'

    alias gf='fg'

    alias gfg='fg'


This resonates... from mine:

  alias emcas='emacs'

  alias emac='emacs'

  alias emasc='emacs'

  alias enacs='emacs'

  alias emas='emacs'

  alias emascs='emacs'

  alias eamcs='emacs'

  alias eemacs='emacs'


I have aliased `emacs --daemon` to `emacsd` and `emacsclient -t` to `e` because I use it so much.


You might be interested in https://github.com/nvbn/thefuck


In a similar theme, I'm really glad I added ":q" as an alias for "exit" in my shell.


I must forget to give grep a filesystem path argument at least 10% of the time I invoke it. What I'm intending to do in all of those cases is recursively grep in the current directory. "Warning: recursive search of stdin" might be my most-seen console error message.


This used to happen to me constantly. I fixed it accidentally in switching to Ripgrep [0] which defaults to recursively searching the current directory. Bonus: it parallelises too!

Honourable mention also to FZF [1] which not only makes it trivial to locate a file in a directory tree, but has revolutionised my history use with its fuzzy matching.

[0] - https://github.com/BurntSushi/ripgrep

[1] - https://github.com/junegunn/fzf


Nice! I create two-letter aliases, it truly helps:

gc = git checkout

gs = git status

etc...


Here's mine:

    alias g="git"
    __git_complete g _git # enable git autocompletion (branches, etc.)
    alias gc="git commit -am"
    alias gp="git push"
    __git_complete gp _git_checkout # checkout is more useful than _git_push because it autocompletes the branch
    alias ga="git add -A"
    alias gd="git diff"
    alias gb="git branch"
    alias gx="git checkout"
    __git_complete gx _git_checkout
    alias gs="git status"
    alias gl="git log"


Good collection! I have many of these, plus a slightly longer one for quick fixups that happen all too often:

    alias gcane="git commit --amend --no-edit"


probably you will like

  function fixup() {
    git commit --fixup=$1
  }
  function refixup() {
    git rebase -i --autosquash --autostash $1^1
  }


I have a ton of aliases, but I have them all preceded by an underscore. That way, I don't muck up native commands.

alias _up='sudo apt update -y && sudo apt upgrade -y'


Nice, I have those same aliases.

Also not afraid to add multi-letter aliases if I find myself typing the same multi-word command over and over.

For example git diff master HEAD becomes gdmh


alias g=git

And then define one and two letter aliases for the things you do often:

st=status

l=log --with-prettiness

ap=add --patch

shit=reset

co=checkout


heh, I should make an alias for no-break space (code point 160/00A0) + 'grep', because I type it so often when I pipe and get:

Command ' grep' not found, but there are 17 similar ones. Maybe I'm not the only one :).


I handle this case with auto-correction for all one off errors via <tab>.


> start with Clojure, not one of the less-practical languages.

doesn't expose you to typed functional programming (the ML school) though.


THIS. Especially Clojure. If you want to become a better JavaScript programmer, definitely dabble in Clojure.


for those wanting to brush up on their regex skills, here's a nice tutorial:

https://regexone.com/


The Jetbrains suite. Lightens the cognitive load, makes it easier to refactor and keep code tidy. All of which allow me build better software.

For almost everything else e.g. git, learning how to use the command line instead of a UI is the best way for me to learn how the tooling works.


I'm dependent on the JetBrains IDE for most of my work. It really shines in showing the user best practices and recommendations. A lot of programming concepts are the same across languages, and this IDE helps you find the right function/method with its suggestions. And at least for Java it'll suggest variable names, it'll suggest if a loop can be converted to a stream, it can generate templates for unit tests, it can tell you if a variable might be null, etc. There are a ton of small features that all add up to a great experience. 10/10 would recommend for a new programmer.


Yes, I would agree with that. If you make websites there is just nothing better than PhpStorm. It is one of those programs that just works and makes your life a hell of a lot easier. Also a huge shoutout to 'lazyone' on SO for always answering my questions. It's one of the rare companies that actually understands what you're saying instead of just pasting canned replies.

BTW I'm not affiliated with them, but they're having a 50% discount sale right now, iirc a 25th birthday sale. They seldom give out discounts, so this is a great time to upgrade too.


Hmmm, I'm not seeing the discount pricing you speak of.


I guess it's only for PhpStorm, and it's ending about 14 hours from now.

https://www.jetbrains.com/lp/php-25/


Oh darn, I already have WebStorm and PHPStorm. Cheers.


The discount applies to renewals too.


Looks like it's specific to PhpStorm, and not any other JetBrains products.


I agree. I've been using IntelliJ Ultimate for a few months now and I don't see myself going back to Sublime or Emacs. I do work mostly with Ruby, TypeScript, Golang and occasionally Clojure. I recently found out that with the Ultimate version, you can install all the language plugins and won't need the other IDEs like GoLand, RubyMine, WebStorm, etc... They're just plugins. So all your settings can live in one place.


It's crazy how many disparate technologies Ultimate integrates for you. Here's a reddit post I wrote a while ago that lists nearly a dozen different technologies/contexts that I use IDEA for in a single project: https://www.reddit.com/r/java/comments/by2ow0/do_you_use_you...

And that's not the "toolbox" license suite - I do all that for the price of a single Ultimate subscription.


A license for Ultimate is cheap enough that if you're going to buy 3 or more of their products, it's better to just buy Ultimate (assuming the tools you need are provided by plugins in Ultimate).


The way I've seen the 'better software' show up for myself is that I see problems that people walk away from because they are just too complex or diffuse and their priorities just don't make them worth the struggle.

Tools that eliminate even a little bit of cognitive load move the point of no return a little bit farther, which means there are kinds of problems you can touch that someone else won't or can't.


Seconded, although to be fair the choice of IDE doesn't really matter as long as it's relatively good. When I moved from vim to Jetbrains, one of the biggest things was seeing all the small errors including spelling mistakes. Being able to easily see and fix minor syntax errors or things like missing variables etc really makes a difference, especially when you are working on a codebase where that was missing for a long time.

If anyone is an emacs/vim user it's certainly worthwhile to enable similar error reporting plugins to get the same effect


The arrival of Language Server Protocol is going to make IDE-like functionality more evenly spread across traditional text editors, "modern" text editors and "IDEs". In Emacs I recommend eglot: https://github.com/joaotavora/eglot


I used to be very gungho on text editors (started my career with Sublime, moved to VSCode later), but I've turned the other way and use Jetbrains products now.

LSP is decent, but I've yet to see any languages with the depth and quality of JetBrains' IDE support, by a fairly large margin. I've had to fight VSCode settings many, many times in the world of Go, but GoLand "just works" for the essentials: intellisense and code navigation are just so much better. Python and TypeScript are probably the best-supported in VSCode, but they still don't meet the mark. The rule applies doubly so for languages that aren't strongly typed. Breakpoint debugging for code and tests is similarly hands-off.

They add all sorts of ecosystem-specific know how to make the experience smooth, e.g. Rails, Rspec in RubyMine.

I still use command line tools for every other part of my workflows, e.g. Ripgrep, git, dependency management, but I haven't found anything else that compares for coding with really excellent intellisense and code nav, other than Visual Studio proper for C#.


Yes, I definitely believe everything you say. I will say that currently I'm using LSP (Emacs Eglot) for Python (pyls) and Rust (rust-analyzer) and the difference is (unsurprisingly!) night and day. I love Jedi and I'm sure Palantir did a decent job, but with rust-analyzer, I wonder if in a year or two Rust in Emacs might not be so far off the best "IDE"?


Back then (10+ years ago?), IDEs tended to be bloated and slow, with convoluted interfaces, and I only used one when absolutely necessary (like Eclipse for Android before Android Studio became the default). But my, JetBrains saves me so much sanity; it changed the definition of IDE for me (or rather, showed what an IDE was meant to be). I only used lightweight text editors before then. (Used Sublime Text before the switch.)

The great thing about JetBrains is that it works out of the box, unlike vim where you need to spend a whole month customizing with 20 random plugins just to get back to working on the project, and it's still probably only 30% as good as JetBrains'.

VSCode is being developed rapidly and I see many good plugins, but it's still quite far behind except for launch speed. At least it can be seen as a competitor to make sure JetBrains keeps innovating and keeps the performance of their IDEs sane so as not to lose customers to VSCode.


VS Code is like 90% good enough for me compared to Eclipse. I was evaluating IDEA and VS Code to switch to and VS Code with its Java extensions was an easy winner considering its launch speed and the responsiveness. IDEA even though tons better than Eclipse still feels clunkier than VS Code.


Unbelievably amazing to be able to shift-click on syntax and jump to the source - even works pretty well in dynamic languages.

I HATE using a Gem then needing to break my train of thought to pull up the official documentation to see what an API interface looks like. Ctrl+click. In and out in 15 seconds.


You just taught me something! I've been using keyboard hotkeys to jump to source. On Mac, I can hold down command (probably ctrl on Linux) and it lights up like a link, shows the gist, and lets me click to go to the source.

This is why I love these sorts of threads :)


Ctrl-B also goes to the definition (at least on Linux) if you don't want to leave the keyboard. On a definition, Ctrl-B lists all usages of the term under the caret.


After a while, you'd think why this isn't the default in other editors. (At least VSCode jumps to obvious sources.)


Seconding Jetbrains products. They’re not the lightest weight text editors and have something of a learning curve. However, they have done a lot to improve my productivity


Came here to say the same. It seems to be the most intelligent IDE I have used so far, in that it can understand what you are trying to achieve (well, most of the time) and help you write better code. I learnt a lot in Python just by following IDEA's code reviews and trying to understand the rationale behind each suggestion. And it really shines when you are developing in a JVM language like Java or Scala.


Same. I started using it for Java long ago and now just subscribe to the "all you can eat" license [1]. $249/year, and 100% worth it so that I can pop open pretty much anything in a familiar, well-tuned interface.

[1] https://www.jetbrains.com/all/


When I found the jetbrains Ides I really started to enjoy programming again. I could think about what I was trying to accomplish instead of getting bogged down in programming overhead. Also paid for the toolbox, worth every penny.


Interesting that you prefer both extremes of UI design.

JetBrains' UI, which shows available options or pops them up in real time.

And a command-line UI, which requires reading the docs to learn options, but can be powerful and chained with other commands via pipes, making it easy to automate UI interactions into scripts, etc.


This is myself as well, enjoying the extremes. Having all the whiz-bang guidance is great and really speeds up my work, except when it doesn't work. Then it's great to be comfortable falling back to something rock solid and unbreakably simple. I never have to "fall-back" multiple times in successive frustration. There is one fallback and it ALWAYS works.

To me it's not worth learning the "in-between" tools for the extremely limited circumstances that I'll need something less than IntelliJ but more than vim/bash. I even hesitate to customize my vim much because I need to rely on it as a fallback on nearly any system, and on novel systems I can't rely on my customization.

I'm not dogmatic about only sticking to these - I'm comfortable with VS Code when it's what my employer/workgroup provides, and I'm comfortable with Sublime, as it's particularly portable (can be run off USB). So sometimes Sublime is the fanciest option.

But anything in between JetBrains and vim needs a real "reason" to bother investing the time to learn.


Just downloaded the GoLand 30-day trial. First impressions are that I now remember how slow and stuck-in-goo Java-based IDEs feel.

What cool stuff should I look at? I am willing to be sold on this.


I have found most IDEA performance issues are either the configured Java heap space or the indexing of project files that could be ignored. The default memory settings are generally pretty conservative; some larger projects run into issues immediately. If your system has plenty of RAM to spare, I would recommend just giving it a few gigs and seeing if things improve.


I don’t notice slowness on it personally. Maybe you are noticing the initial indexing it does. That doesn’t happen often with GoLand.

Make sure you enable Golang modules in the settings also.

Anyways just mentioning some stuff off the top of my head I enjoy:

Debugging is a great experience.

Can find a plugin for most anything, I use the Kubernetes one for syntax completion and documentation. (alt-q I think)

Also you can create .rest files and compose http requests and trigger them right in the files, which I thought was cool.

The documentation pop up by hitting alt-q in general is pretty cool. Don’t have to run over to godoc.

Then most things you’d expect from an advanced ide. Multi line editing... jump to definitions and implementations... Project wide code search and replacement


I'm not sure about Go's specific characteristics, but for PyCharm I love the full-project, semantically analyzed code navigation and remote step-through debugging. I use Visual Studio Code for most JS stuff, but it's awful for wrangling many files simultaneously and learning a big codebase. WebStorm allows me to search for function calls and other things in the entire codebase much faster. I prefer VSCode's git UI and use both apps.


I can compare it with VSCode. Goland is much better at working with multiple Go Versions, which is a big thing in my daily work. Other things that it does better than VS Code are auto-generating unit tests, refactoring function signatures, better package management support, easier to set-up different build/debug profiles ...


Jetbrains offerings are quite fast. What are you comparing them to?


Probably Sublime text. It's just too fast and I can't switch to anything else. I open heavier ones like Eclipse, VS Code etc. only when I need to refactor.


I notice the same thing, and I enjoy IntelliJ / Sublime / Vim. IntelliJ often feels sluggish.

Maybe this could help: https://blog.jetbrains.com/idea/2015/08/experimental-zero-la...

Here was a 3rd party analysis from 2015, showing a reduction in latency when editing XML files in IntelliJ IDEA from ~70ms with large jitter to 1.7ms with small jitter: https://pavelfatin.com/typing-with-pleasure/#summary

I wonder if it's turned on by default today, 5 years later.

Some other techniques: https://medium.com/@sergio.igwt/boosting-performance-of-inte...



at least for JS, the refactoring tools save a lot of headache. write code before deciding on variable or function names and then one click to refactor everywhere in the codebase.


Goland is a Java-based IDE.


I'm a die-hard Emacs user, but when I need to bounce around a codebase or refactor, I jump into the various JetBrains tools. Friggin awesome.


You aren’t die hard enough.


That would easily be using an integrated REPL.

The more integrated it is (with your IDE/editor) the better the experience and productivity boost.

And the difference is quite large. When you are working with a language that has first class REPL support you start to

- 'get' why Emacs exists

- become faster at writing code in general

- write in a much more experimental and open way

- become more motivated in testing smaller assumptions and asking questions about your own or other people's code

With "first class support" there are three dimensions:

(1) The REPL and the editor/IDE have to understand each other well.

(2) The language itself has to be malleable and (de-)compose well in terms of syntax and idioms.

(3) Many things in the language are first-class expressions; in other words, there is a high degree of tangibility in the language constructs.

Most dynamic languages have workable/decent support for REPL driven development so it is always worth testing out.

You find excellent support in: Clojure (and of course other Lisps) and Julia from my experience.


I completely agree with the point about integrated REPL/IDE, and wanted to share some of the combinations I have used in the past, since it can be a concrete getting started point for those who are curious. Some of these are not literally repls, but IMO give a similar experience.

- ClojureScript with Figwheel and the web browser

- Clojure with Emacs Cider, Clojure with Cursive

- R and Rstudio

- Matlab

- ipython jupyter notebook

- Pycharm debug breakpoints that are triggered by unittests (Running the unittest to initiate a python repl at the breakpoint)


What I really love about R/Rstudio is that you can highlight a few lines of code and execute them in isolation


My main responsibility is developing and maintaining a microservice written in Java.

However, one thing I've found invaluable over the years is developing operational tools to support my deployed code, in a language with a built in REPL.

At different times, I have used Clojure and Jython for this (high level way to call my Java libraries, or to invoke APIs over the network), and most recently Ruby (has been easy to deploy and run scripts or ad hoc commands over irb for operational tasks, in the same environments where my service runs).

This allows me to build up code over time that I can use to

* Quickly make calls to my service to triage or debug production issues.

* Write scripts to quickly validate a new install.

* Script operational tasks at a high level that doesn't make sense to build into the service itself (can allow the service to be more stateless, for example).

* Bypass layers and make calls to the underlying database (can be more powerful than the command line tools dedicated to a specific database).

* Can be more powerful and composable than curl or Postman for making web calls.

* Have used it to analyze the state of messages in a Kafka topic (with a custom gem).

So I highly recommend building a tool set around a language with a good REPL for anyone responsible for a service with a REST API, or any other kind of API available over a network.
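
To make that concrete, here is the rough shape such a helper can take, sketched in Python with the requests library (the base URL and endpoints are invented), loaded into a REPL with `python -i ops.py`:

    # ops.py: ad hoc operational helpers, meant to be driven from a REPL
    import requests

    BASE_URL = "http://localhost:8080"   # hypothetical service endpoint

    def health():
        """Quickly validate that a new install responds."""
        return requests.get(f"{BASE_URL}/health", timeout=5).json()

    def get_order(order_id):
        """Triage a single record when debugging a production issue."""
        return requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5).json()

    def replay(order_ids):
        """An operational task that doesn't belong in the service itself."""
        return [requests.post(f"{BASE_URL}/orders/{oid}/replay", timeout=5).status_code
                for oid in order_ids]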


Have you tried Groovy? Sounds kind of ideal if your main work is with Java. I have a BeakerX (JupyterLab+Groovy) notebook open pretty much continuously next to my IDE while I code so that I can validate all my assumptions as I code. A nice workflow is that you start with a snippet of experimental code, which you tidy up in the REPL, then it splits into the actual function and test code, one part going to the unit test and the other to the real code.


Yes, this is extremely useful, thank you for sharing.

I also often use the REPL as a "tool" rather than just a editor feature.

For example this week I'm working on a data integration. This is a very specific one-time task, so there is currently no need to write accessible production code. I can just use the REPL to do the "ETL" and leave the code as-is.

There is merit in keeping it and finding functionality to extract, or abstractions/compositions to build, in the future, but the point is that the integrated (Clojure) REPL is already a sufficient, powerful, and very ergonomic tool.


Writing the overall design in plain English before writing the implementation. Not super detailed, but the main data structures, invariants, and mechanisms that will make the implementation work. Then start the implementation, refining that document as I discover new things.


This, most of all. Substitute native language if not English; the important thing is that the project be defined and developed in both a human language and a computer language, so that mismatches can be identified and resolved.

  * Start by describing what you are trying to do.
    * Specifically.
    * Not 'build a web business to enable users to achieve their
      potential', not 'create another library for X but simpler'
      but *specifically what the software will do* and, most
      importantly, the priorities of scope increase (it'll happen
      anyway; just get the priorities and their order down in
      text ASAP).
    * Put it in a readme.txt or something.

  * For any given subsystem of the code (meaning: any section
    which can sort-of stand on its own) write another such file,
    in more detail.

  * Let these files guide your tests too.

  * Keep them up to date. If priorities change, *start* by
    updating the readmes. The code isn't immutable; nor is the 
    plan. But the plan comes first.

  * When unsure how a new subsystem or feature is going to work,
    write out your ideas and thought processes in text. Don't
    start coding at all for a day or two after you sketch out the
    basics. *Append* to this file instead of replacing large
    parts.
[edit] Wasn't intended to quote that part (sorry to mobile users) but I can never remember how to get bulleted lists on this site...


I found that combining such an approach with writing down basic interfaces works really well - after I have a rough written idea, I iterate over the interface design with full descriptive comments on both the interface and the methods.


Have any samples/examples you can share?


Things like that https://gist.github.com/antirez/ae068f95c0d084891305. Usually more detailed with data structures, but this one was a very conceptual thing.


Absolutely this. When facing a tough problem it’s a great tactic to write out what you’re trying to solve and how you plan to solve it in prose-style English.

I’ve done it often without ever sharing my writings with anyone at all, and always felt that the code turned out relatively good as a result.

I’d also add that a very rough prototype that you throw away also helps line up your thoughts.

The key is having something reasonably concrete in front of you that forces you to think of the invariants, compromises, etc. in the system, before making all those decisions concrete by writing loads of code.


Leslie Lamport thinks the same:

https://dl.acm.org/doi/fullHtml/10.1145/2736348

Even a rough sketch is good enough a lot of the time:

https://m.youtube.com/watch?v=-4Yp3j_jk8Q


This is the most useful post so far. I take it a step further and make a diagram in draw.io to understand exactly the data that is coming in and out. This is especially important for working with legacy code where you might get random crap like a name instead of an id and that could throw off your design.


Here is another useful tool to draw graphs/flowcharts: https://whimsical.com/


This is coming from creator of Redis so we better listen :)

Thank you for your work Salvatore!


Exactly this one.

I don't get why the top comments are all about some technical tooling, as if the major part of a developer's job were typing.


I'm trying to be better at this. Any examples/recommendations you could share?


The way I do it is that whenever I start working on some functionality and it’s not immediately obvious how to implement it, I open a text file and write down my thoughts as something between a stream of consciousness and a design document, usually formatted as a multi-level bullet-point list.

I start with what I am trying to achieve and list the different design approaches I can think of, adding advantages and disadvantages of each one as they come to mind. By the time I’ve written down all my thoughts on a design decision, it is often clear to me which approach I favor.

This can be repeated for more and more detailed aspects of the implementation (e.g. “which function should this be added to” or “what to name this function/struct/variable”) until I feel like I can come up with the remaining details as I’m writing the code. If I get stuck somewhere later on, I can always go back and add more details in the text document.

For larger or more important features, this list can be cleaned up and become documentation or perhaps a comment somewhere, but I often find that the writing is a useful tool to get unstuck and to clarify my thoughts even if I end up never reading it again.


Not sure if this is what antirez had in mind, but when I'm working on stuff, I whiteboard (if I'm at home--and who isn't these days) or write in a notebook.

First thing I figure out is how I want to interact with a thing. Whether that's a program or a class or a function. How do I want to call it? What parameters do I want to pass? What do I want it to return? How do I want to use what it returns? So, basically, write the interface first.

If I'm building something with multiple parts rather than a single class or function, I'll map out how these things all work together. A loose graph of interactions. Invoke A and have it return X; invoke B and pass X to it and it returns Y; etc.

Then I'll consider failure modes and think about what should happen if something doesn't work out quite right. Is it possible to route failures to a centralized cluster of error handlers so I don't have to implement error handling at every level?

Finally, I'll think about whether I can map behaviors to defined data structures instead of controlling flow with if/else patterns.

Once I have all that written down or mapped out, then I'll start implementing from the outside in. Stub the object, methods and return dummy data structures that fit until I have a complete system that's interacting the way I want. Then I go in and implement the actual functionality I need.

The last part--implementing functionality--often implies modifications to my initial thought process. But it's easier to understand what those changes affect if you've already mapped your design. So you might think B can produce Y with one parameter. Maybe it turns out you can't. So now you need to add a new param. Where is that going to come from? Well, you don't have to invent that out of nowhere because you've already mapped out what is happening. You know that you need to either add another node to the call graph or change the return value somewhere else.

By the end of the process, you have a working program, and you've also done a lot of your documentation work as well.

Again, I have no idea if this is what anyone else is talking about here. But this is how I personally work. It annoys the hell out of some people. But it works for me and helps me create sane software with interfaces people can remember over time.


Reading official documentation when working with new tools/frameworks.

Googling every hurdle as it comes & over-relying on StackOverflow is neither effective nor satisfying. Some of the projects out there have amazing documentation (e.g. VueJS, Kafka). It's best to read the overview & skim the high-level stuff to understand the core components/principles/terminology. It makes it so much easier & more enjoyable to use those tools.


Why isn't this higher?

Give everything a good read before really working. You don't need to remember everything, but you need to know what's there so you don't wind up reinventing wheels, endlessly googling with the wrong search terms, or doing things people who use the tool correctly find inscrutable. It's so important.

I will note that this is much easier to do well when you're a more experienced engineer than it is when you're just starting out, but making a habit of it, and going back over the docs once you do have more experience, is the best way to build that skill.


I think the main problem with this approach is the all-or-nothing problem. When reading official docs, it is not always clear whether you already have everything that you need. Reading the whole doc is usually not a possibility, considering most modern tools easily have 100+ pages that go down the rabbit hole.

In short, good official documentation is scarce and time to read is even more so.


It is even better if you draw diagrams on dotted paper while you read.


For me, the transition from Bash to Zsh has been a huge efficiency boost. Mainly because of some great plugins for Zsh, such as z, zsh-peco-history (better history search), zsh-autosuggestions, and zsh-syntax-highlighting.

My blog post about setting up a Linux workstation describes this in detail: https://tkainrad.dev/posts/setting-up-linux-workstation/#swi....

The best thing is, there is no initial productivity hit. You don't miss out on any shell features that you are accustomed to.

Also, learning complex IDE features really pays off. At the very least, become familiar with the debugger.

Finally, I spent the last months making https://keycombiner.com/ in my spare time. It is an app for learning keyboard shortcuts and getting faster/more accurate at using them. It already made me more productive because I learned a lot of new shortcuts and found some problems in my typing accuracy.
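For anyone who wants to try those plugins without a full framework, wiring them up is just a few lines in ~/.zshrc. The paths below assume the plugins were cloned under ~/.zsh (and that "z" means rupa/z), so adjust them to your own setup:

    # ~/.zshrc
    source ~/.zsh/z/z.sh
    source ~/.zsh/zsh-autosuggestions/zsh-autosuggestions.zsh
    # zsh-syntax-highlighting documents that it should be sourced last
    source ~/.zsh/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh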


For zsh, I highly recommend zsh-histdb, it stores all your commands in a sqlite database, along with data like timestamp, current working directory, hostname, session id, etc...

It has its own "histdb" command, but the best part is that it integrates well with zsh-autosuggestions, so with the right SQL query you can make it suggest something like "the latest command in the current session that matches, or if not found, the most frequent in that directory".

I know it is controversial because it is not using a text file and UNIX loves text files, but it is really nice, and you still have your .zsh-history if you want it.


histdb is one of my top favorite tools. And I've come to the conclusion flat files can only get you so far. sqlite is probably the next best thing to flat files, and I think history is one of those things where switching to a db is an immediate win.


I would highly recommend checking out fzf for better history search. Found the recommendation on another similar thread here and from coworkers. It's surprisingly fast and very intuitive.

https://github.com/junegunn/fzf
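Beyond the ctrl-r history widget, fzf works as a general fuzzy filter over any list on stdin, which makes a few everyday patterns very cheap (the commands below are just common examples):

    # fuzzy-pick a file to edit
    vim "$(fzf)"

    # fuzzy-cd into a subdirectory
    cd "$(find . -type d | fzf)"

    # interactively pick a process and kill it (PID is column 2 of ps output)
    kill "$(ps aux | fzf | awk '{print $2}')"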


+1 to this. I've never been a fan of how <ctrl r> works in bash, and fzf makes it soooo much better.


Both this and histdb look quite interesting. That being said, I never had the feeling that I am missing anything with zsh-peco-history.


Wanted to thank you for writing your blog post. I have been on Linux full-time since about july 2019 and found your post some time after that. Your post really kickstarted my productivity in Linux.


Glad it helped you! Thank you for the nice words :)


Today I switched from zsh to fish and I'm already much happier. Was using zsh for 4+ years, too


What benefits do you see in fish when compared to zsh?


I would say fish is to zsh like what zsh is to bash.

More seriously, for a start: good defaults, highlighting and autosuggestion built-in, parameters search with help and completion... (but it's not Posix).


Regarding POSIX, I've been using Fish for about 4 years now.

POSIX always comes up, how it's a deal breaker. I want to mention that I write all my scripts to be as POSIX-compliant as I can, or using Bash extensions. You still have Bash/zsh on your machine, so you can keep using your scripts and don't miss anything. Shebangs keep working:

    #!/bin/sh
    #!/bin/bash
    #!/bin/zsh
Personally, I actually don't change my default shell (chsh step). I simply set my terminal to use the fish command instead of invoking the default shell.

    - Gnome Terminal, there's a Title and Command tab. You can set a custom command there. Just put the path to fish
    - Terminal.app, Preferences > Profiles > Shell > Run command
    - iTerm.app, Preferences > Profiles > General > Command
    - Tmux, on your .tmux.conf `set -g default-shell /usr/local/bin/fish`
It's more portable for me that way.


I was also interested about it and from my research, Fish has two major differences:

- it enables the cool functionality out of the box, so unlike zsh you don't need to have large configuration file to enable everything

- it is not afraid to break bash compatibility to fix confusing scripting issues, so fish most likely will fail when executing a bash script, but writing scripts in fish should be more enjoyable


I also want to thank you for this blog post. I'm a long time linux user, but just got a new machine and decided to start from scratch rather than try to port over my previous environment. Looks like just starting from your post will save me some time.


Thank you!

I am currently setting up a new desktop myself and will soon update the post regarding Ubuntu 20.04. However, this will only be very minor changes, almost everything still works exactly as described :)


An actual debugger. I started as a PHP/WP dev and spent many hours running results through echo or var_dump. IMO the debugger is the absolute first thing you need to learn about the platform you're writing for. Without it, you're taking shots in the dark and you truly don't know how your code is executing.

It seriously pains me to see people not using one. I have a friend who is taking an online PHP backend class. There was one lecture on debugging, and all it consisted of was "here's what using var_dump looks like". I showed my friend how to actually set breakpoints in their JS code, set watches, etc and they felt cheated by their class. They should.
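The specifics differ per stack (Xdebug for PHP, the browser devtools for JS, pdb for Python), but the core moves are the same everywhere. As a rough illustration of the workflow with gdb on a C program (file, function, and variable names here are made up):

    gcc -g -O0 main.c -o app                    # build with debug symbols, no optimization
    gdb ./app
    (gdb) break main.c:42                       # stop at a line...
    (gdb) break process_request if count > 10   # ...or at a function, only when a condition holds
    (gdb) run
    (gdb) print request                         # inspect a variable at the breakpoint
    (gdb) next                                  # step over the next line
    (gdb) backtrace                             # see how you got here

Whatever the tool, that loop of break, inspect, step beats sprinkling var_dump and re-running every time.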


So much this. And command-line debuggers are usually awful. Just being able to set a breakpoint in the actual text file you're editing without using a different tool, and stop when a condition happens can speed up your workflow so much.


+1 VisualStudio w/ C#/.NET was very eye opening to the power of live debugging code, ability to evaluate expressions and introspect vars

EDIT: I originally said VSCode, but i meant the OG Visual Studio


I've not tried code for .NET yet. I generally like the normal-IDE-ness of Studio


I'm really confused by this. Don't debuggers come embedded into any IDE worth using?

How could someone start programming this century without access to one?


In the PHP world I'd argue most developers don't use an IDE, let alone a debugger. Setting a debugger up with Xdebug or ZendDebugger is also not easy for those less experienced with setting up the actual PHP environment.


Any profiler.

As a development tool: You can default to writing the majority of your code dumb, terse[1], and straightforward, even if you know there's a clever algorithm you might be able to use, because that way is easier to debug. Computers are fast and N is usually smaller than you think, and when you apply the profiler you'll find out that the biggest performance problem isn't the thing you were going to optimize anyway.

As a product tool: People are more likely to buy responsive programs. The state of modern websites is so bad that non-programmers will actually comment. Every tester for Space Trains[2] commented on how smooth it is. That's a game, but I've seen the same comments on productivity software I've written.

[1] As in omitting layers, not as in omitting descriptive variable names.

[2] https://www.youtube.com/watch?v=LRJP0tie-30
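Concretely, the first profiling pass can be a one-liner; shown here with perf on Linux and Python's built-in cProfile as examples (substitute whatever profiler your stack has, and your own binary/script names):

    # sample a native program and browse where the time actually went
    perf record -g ./myapp
    perf report

    # same idea for a Python script: sort functions by cumulative time
    python3 -m cProfile -s cumtime myscript.py | head -n 30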


By layers, do you mean interfaces and abstract layers?


Yes.

Some abstractions are good, but I've seen many projects where an abstraction layer or interface is added that does the exact same thing as the code behind it. Or even more often, there will be one or two specific functions in the layer that do actual work, but it would've been fine to just write a helper function for that and not wrap everything else. It's actually pretty rare that whatever needs to be abstracted covers an entire conceptual area to the point where thinking of it as a "layer" makes sense.


TabNine: https://www.tabnine.com/blog/deep (the code samples are GIFs which I had to click to play)

This thing is fucking magic.

It's ML autocomplete, with help from 'traditional' autocomplete methods that use static analysis. Instead of just completing one token at a time like traditional completers, it can do entire sentences or multiple lines of code in one go, and is freakishly accurate. And since it parses language it helps you write comments, and can understand relations between code. E.g. if you are writing 2d movement code and you do x += dx it'll automatically suggest y += dy for the next line based off of previous similarities; of course if you have x += [complex math formula] it'll fix it up for y and convert cos to sin, etc.

Support for many editors, and easy to install in vim. Free for personal use. Works for all languages, including plain English (and maybe other non-code languages?).


I tried this when it first came out but it didn't seem much better than PyCharm's usual suggestions (which are admittedly excellent among its IDE peers). I rarely (maybe never?) saw it do any multi-line suggestions, let alone accurate ones. It was also very slow to suggest anything in the first place (I believe the network calls were slow, at least compared to local/native autocomplete).

Maybe it's progressed and I should try it again today.


Not sure when you started, but I've been using TabNine in PyCharm for a few months and it is absolutely mindblowing. I've had long single-line autocompletes (no multi-line) and oftentimes it suggests things I may not have thought of: "now that you mention it, I DO want that idiom". It's snappy enough for me and I am not exactly a patient individual.


I love TabNine but had to stop using it because each instance can use 3GB of memory... Way too much for an autocomplete extension

https://github.com/codota/TabNine/issues/43


It's only too much if you need that memory for something else though. If it manages to be responsive with that memory and you only have 1 ide open, I don't think 3gb should be a problem on a modern system.


It always starts with this premise, and suddenly every application, no matter how silly, demands 3GB. I understand that progress often happens by putting more attention on aspects other than economy, but again, some people may value that progress less.


It uses _so_ much memory. I recently added an extra 16gb stick to my laptop, maybe I'll give it another try.


Disappointing to see the pricing model has changed; there used to be a license for unlimited project sizes, but now appears to be a $15/mo sub for their paid service.

Understandable, though: the unlimited license was from before they transitioned from on-device models to more complex, larger cloud models.

The free 400KB limit is quite generous, but you may need to spend time tuning the ignore rules if you have junk in your project folder.


Similarly, Kite [0] for Python and JavaScript. I actually prefer Kite to TabNine, but ymmv.

[0] https://kite.com/


Kite got criticism for tracking and injecting ads.

https://qz.com/1043614/this-startup-learned-the-hard-way-tha...


strace.

Even after having learned many programming languages and contributed to various projects, it was only when I started using strace that I felt I could truly, efficiently understand what any program does, and could reliably write programs that do things fast.

I believe that "syscall oriented programming" (making your program emit exactly and only the right syscalls) results in clean, understandable software.

Now I use strace every day, as it is often the fastest way to figure out problems with any tool, written in any language, open-source or proprietary.

- Something hangs? strace shows how it's hanging.

- Computer is slow? strace will likely show who's the culprit spamming syscalls.

- "Unexpected error occured"? Bypass programmers doing poor error handling, and witness the underlying "no such file or directory" directly.

Last week I even used strace to debug why my leisure-time computer game wouldn't load a mod, and as usual, strace did the job.

strace helps to really understand computers.

If you want to learn more about strace, check out Brendan Gregg's pages like http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-..., my presentation for an intermediate-level example (https://news.ycombinator.com/item?id=16708392) or my project to build a programmable strace alternative (https://github.com/nh2/hatrace).
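A handful of invocations cover most day-to-day uses (the PIDs and commands below are placeholders):

    # watch what an already-running, hung process is doing right now
    strace -f -tt -p 1234

    # see which files a command tries to open (great for "no such file" mysteries)
    strace -f -e trace=open,openat,stat ./mytool --some-flag

    # per-syscall counts and timings, to spot who is spamming syscalls
    strace -c ./mytool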


It's so disappointing that dtrace is neutered by System Integrity Protection on MacOS. When I want to do this I have to stop and transport my workload to a server or VM, which may or may not reproduce the issue.


strace is the first thing I skimmed down the thread for. You can learn a lot about how things work (or aren't working) by getting really familiar with an strace. Some of my coworkers give me grief for how easily I jump to "let me see an strace" but it works.


Do not use strace. Use sysdig which is superior in just about every way.


sysdig is useless without installing nontrivial performance-impacting instrumentation, cannot handle non-IP networking, does not fully report all syscalls, has a license with patent crap in it, has gated features behind a paywall, and cannot inject syscall faults. It's not even in the same class of tool as strace at this point.


> sysdig is useless without installing nontrivial performance-impacting instrumentation

Most reasonable people reading this sentence would come away with the conclusion that strace is fast, whereas sysdig has some inherent overheads. In reality it is strace that has performance and other problems which make it completely unsuitable for production use (strace will slow syscall-heavy code down by a factor of over 100; sysdig won't). Sysdig, on the other hand, can definitely be used in production and I always found the performance overhead minor. Can you point to something showing otherwise? BTW, newer versions of sysdig do not require a kernel module, thanks to eBPF (but I have not used this).

> , cannot handle non-IP networking,

What is an example of a networking related query you can do with strace but not with sysdig?

> does not fully report all syscalls

Can you expand? Are you referring to the fact that sysdig will drop traces if the userland client cannot keep up (which is a feature and not a bug, and something that all production grade tracing tools do)?

> , has a license with patent crap in it,

As far as I'm aware sysdig's core is Apache licensed and the user scripts are MIT and GPL licensed. Apache has a patent grant, which seems better than not having one. What is your specific beef?

> has gated features behind a paywall,

What features that strace offers are behind a paywall in sysdig? What's wrong with a company that provides a tool that massively advanced the (linux, pre-eBPF) state of the art as open source, free to all, also providing some paid offerings on top?

> and cannot inject syscall faults.

This is indeed a useful recent-ish feature I did not know about so thank you! But there are other ways to do it, and something that's orthogonal to the core tracing functionality.

> It's not even in the same class of tool as strace at this point.

Indeed -- the only reason to use strace at this point is because you already know it and it is likely available. This may change if strace switches away from ptrace, but for now it is a joke. If you want something that just does strace, but much better (minimal overhead, powerful and intuitive query language with CLI autocompletion) use sysdig. If you want to use the most general and powerful tool that can tell you lots of other stuff besides syscall usage (but has a much worse UX) look at eBPF and perf. If you want to be a serious performance engineer or similar you will have to learn it, but I suspect for most people sysdig has the best ROI. Perf and dtrace are both (far) more versatile but, IMO, (far) less pleasant to use.


If you consider processes as tools, there's one that I suggest to junior programmers bucking for responsibility/promotions.

Twenty minute cleanup. Nobody is really going to notice if you spent 5 hours or 5:20 on a task. As you're closing up and getting ready to push your changes, look if there's anything you can do to make it look or work nicer.

Eventually you start incorporating some of the lessons learned doing this into your implementations.


+1 to this. I've developed the habit of reviewing my PRs before publishing them. When you assume the role of a reviewer, you end up catching a lot of little (and sometimes big!) stuff, reducing the total turnaround time.


+1 to self code reviews, I almost always find stuff I've missed or could have done better.


I often comment on my own PRs to explain alternatives or tradeoffs I considered. These aren't necessarily worth capturing in permanent documentation or TODOs, but can share knowledge or build confidence that I've considered various angles that might come up in a review.

I'll also call out places where I'm not happy with the implementation, looking for feedback, etc.


For many of the tools that have improved my productivity, it was not the tool that was the breakthrough but the realization of the tool’s value. For example, version control has existed since the beginning of time practically, and I begrudgingly used RCS, SCCS, VSS, and probably other version control systems for ten or fifteen years until I had that Eureka! moment (coinciding with Git’s release, roughly) that inspired me to actually embrace version control tools. A similar experience happened with automated testing: I’d gotten the testing-is-good bondage and discipline spiel many times, but it wasn’t until I started writing extensive unit tests for language parsers that I realized how wonderfully empowering they can be.

That said, along with Git, I’d list Gdb (or LLdb or any real debugger), Emacs keyboard macros, Python’s venv facilities, and Django’s database migrations among the tools that changed my life.

Somewhat consistent with the it’s-not-the-thing-but-the-realization-of-the-thing’s-value theme above, I’d say reading the Practice of Programming back in ‘99 took my programming productivity to a new level, because it made me realize that one of the central tasks of an abstraction builder is creating a language that allows you to express thoughts in terms of that abstraction. Once you’ve done that, you “just” need to implement the language and all of the problems expressible in it become easy, even trivial.


Using multiple programming languages.

Using Golang has helped me create better data structures, and using C helped me understand linking, using python helped me understand closures, and using Ruby helped me understand that I hate programming.


can you elaborate on ruby? i'm curious to understand your experience with it and what brought you to the conclusion that you "hate programming".


I'm not the original person so I have no idea what their experience is, but I feel pretty much the same. Every time I work in a language other than Ruby, it feels like programming. When I work in Ruby the language feels almost effortless. I wish I could do everything in Ruby and it frustrates me when there is something I can't make work in Ruby. Especially Rails, everything that works works so smoothly. When it doesn't work I get very frustrated and switch to another language only to find out it's even harder in another language.

I don't like programming. I like it when the computer does what I tell it to do. I find that drastically easier to accomplish in Ruby (especially Rails).


What are your thoughts on Crystal?

See: https://crystal-lang.org/


Would you like to pay now (compile time) or later (runtime)?

We have ways of dealing with the scaling of runtime cost in production (by building out the infrastructure environment).

There's no way I know to speed up the edit/compile/run loop during development.


It's ruby but with extra steps


Could you expand on the Ruby part? I'm not sure if you hate all programming or programming in anything other than Ruby.


you should take rust for a spin :)


The vi mode for Bash. Blew my mind when I discovered it and it probably saved me hundreds of hours already. I used to have multiple copies of this cheatsheet [0] at my desk for every new developer I would see editing a terminal command with the left and right arrows.

[0] https://catonmat.net/ftp/bash-vi-editing-mode-cheat-sheet.pd...
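For anyone who wants to try it, enabling it is a one-liner; the second form applies to any readline-based program (e.g. python's interactive prompt or psql):

    # in ~/.bashrc
    set -o vi

    # or in ~/.inputrc, to get vi mode in every program that uses readline
    set editing-mode vi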


I dunno, I don't think it really gets you more than just adding some basic mappings

    # mappings for Ctrl-left-arrow and Ctrl-right-arrow for word moving
    "\e[1;5C": forward-word
    "\e[1;5D": backward-word
    "\e[5C": forward-word
    "\e[5D": backward-word
    "\e\e[C": forward-word
    "\e\e[D": backward-word

    ## arrow up
    "\e[A":history-search-backward
    ## arrow down
    "\e[B":history-search-forward


All of those require me to shift my hands away from the home row of the keyboard though. The real magic of vi-mode is that everything you need is right there under your fingertips 100% of the time. Well, except escape, but that is why I map caps-lock to escape ...


Ah, interesting. That's something I've never even considered before. My hands seem to just move naturally back and forth without thinking. I know that people have brought up having to move back and forth between keyboard and mouse as being a pain point, but never thought about having to move out of the home row as one as well. For me, moving back and forth between keyboard, touchpad, mouse just seems second nature. I do wish I was better at dual-wielding keyboard and mouse, though, so I've been looking into mirrorboard.


Holy shit. Thank you.


magit (https://magit.vc/) - a git interface for Emacs. Hyper-interactive and ergonomic, feels like vim for git. Highly pleasurable to use and makes you significantly more efficient.

SLIME (https://common-lisp.net/project/slime/) - a Common Lisp development environment, also for Emacs. Comes with a REPL, interactive debugger (way better than gdb), the normal IDE-like experience you know and love, and this fantastic tool called the Inspector that is basically an interactive, modal editor for your program data. The Inspector is one of the most novel and useful tools that a development environment can have... and I've never seen another IDE that has anything resembling it. SLIME gives you a highly interactive and fluid development experience that I've never seen anything else come close to.

Spacemacs (https://www.spacemacs.org/) - a starter kit for Emacs that gives you a sane out-of-the-box configuration and an optional pre-configured Evil (vim keybinding emulation) setup. Much more flexible and powerful than Vim (elisp, while a bad general-purpose language, runs circles around vimscript) and much better ergonomics than vanilla Emacs (Emacs' mode-less interface is just straight-up worse for your hands than Vim's modal interface).


If you're after specific tools in a workflow...

- the :normal command in Vim and Evil.

- learning to use tags to navigate code.

But more generally learning how to take something seemingly complex like Linux or Git and then delve in and read the code and understand how it actually works. Learning to read good technical books and manuals and understand how something was designed and how it was designed to be used.

Colleagues think I work magic; in reality I'm just as thick as they are, I just RTFM.


hahaha...that's my 'secret'

recently I've been paralyzed by the amount of development in ML/DL/RL. Often I have to step back and remind myself to focus on the fundamentals.


I'm going to re-interpret the question more broadly than just "tools" (unless you consider a technique to be a kind of tool):

* Taking good notes

* Writing good plans, good documentation

* Sharing updates and coordinating with the right people at the right time

* Understanding an unfamiliar codebase

* Test frameworks

* Different design techniques (pure functions, dataflow programming, etc)

* The terminal and related features (like emacs bindings, or middle-click to paste last selection)

* Firefox/Chrome devtools

* Emacs keyboard macros


I can no longer count on my fingers how many massive coding efforts from our backlog evaporated into nothing because we sat around and thoroughly talked through the actual business cases.


Spacemacs!

But also, writing the documentation as one works through the problem, either in org-mode, or in a wiki.

The older I get, the less I remember, so the documentation is key.


I'd love to get better at quickly understanding an unfamiliar codebase. Do you have any resources I could dig into?


Most of my productivity gains these days have come from aligning myself and my workstation.

- i3wm (https://i3wm.org/) - particularly getting comfortable editing the .i3/config. It's the most significant productivity change I've had since switching to Linux from Windows.

To get into it, I highly recommend this 3-part video series from Code Cast: https://youtu.be/j1I63wGcvU4

- yadm (https://yadm.io/) - a dotfile manager. It's essentially a git wrapper, but it's allowed me tons of freedom tweaking my setup without worrying about persisting it.

It supports encryption and switching file versions based on which system you're on.


Fast switching between workspaces + efficient use of screen real estate + great hotkeys make i3 truly great. I have used it for many years and wouldn't want to miss its minimalism.


IntelliJ IDEA has taken refactoring from being a chore to being trivial, consequently making me much more likely to clean up ugly code when I encounter it.

Black, the Python code formatter, means I don't have to spend a single brain cycle on styling the code.

mypy (with a strict configuration) has forced me to think about types, making for code which is much easier to follow and integrate with. No more `if isinstance(param, list)` or similar faff.

Bash, even with all its footguns, is a great lowest common denominator for getting a vast array of stuff done.


Do you have an example of the configuration you use with mypy? (If you could post it in a gist I sure would appreciate it! I've done a couple mypy tutorials but I haven't pulled the trigger on adding it to my projects.)


https://gist.github.com/l0b0/b655d155c5cfc509c339cd0b1cd494c...

Beware that if you're not yet using mypy I would not recommend starting with this - it's a lot of work to get to the stage where this succeeds. I'd recommend starting with no configuration at all and adding one setting at a time so you give yourself time to learn what they all do and how they help.
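As a rough idea of what "one setting at a time" can look like on the command line (the flags are standard mypy options; `src/` is just an example target):

    # begin with the checks that catch the most common gaps...
    mypy --disallow-untyped-defs --warn-return-any src/

    # ...and once the codebase passes those, graduate to the full strict set
    mypy --strict src/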


I have a coworker I pair program with a lot who is a member of the VIM religion. But when we need to refactor something it's "let's use your editor (GoLand) for this".


Jetbrains IDEs have been mentioned already, but I'll add a specific feature of Jetbrains IDEs, the Productivity Guide[0].

This contains a list of keyboard shortcuts and tracks your usage or non-usage of them, identifying various ways you can speed up your workflow in the IDE using keyboard shortcuts instead of the mouse.

Building on this, the Key Promoter[1] plugin will notify you when you could have used a shortcut instead of the mouse.

Reclaim seconds of your life wasted moving the mouse around.

[0] https://www.jetbrains.com/help/idea/productivity-guide.html

[1] https://plugins.jetbrains.com/plugin/9792-key-promoter-x


Interesting...

Does something like this exist for Excel? Asking for a friend (seriously).


The static analyzer. There are so many subtle things that aren't caught by the compiler and don't cause any problems in your testing but will definitely come up once 10M users are running your app. The static analyzer can catch some of them before you even run! It's great.

The various sanitizers - Address Sanitizer, Thread Sanitizer, Undefined Behavior Sanitizer. These all also find things before you ship. They require (re)building and running but it's worth it.

Beyond that, any sort of instrumentation that can show you in real time information about your running application. You can see memory growing, even if it's not leaking. You can see performance tanking and know the exact function that's causing it before you quit.

Beyond that, for me using a framework that was well written made understanding architecture much easier. I had used MFC and CodeWarrior PowerPlant for a few years. They got the job done and were easy enough to use, but I didn't really learn anything from them. But when I moved to using Cocoa, it was so elegant and the separation of concerns was so good that it clicked and really taught me a lot about better architecture.


PlantUML[1]: Drawing any diagram helps me to have something concrete as it makes me think of the various components, connections and the invariants. It also gives a bigger picture.

It is also very easy to learn, and diagrams can be embedded anywhere and rendered by the PlantUML server, or the rendering can be handled by your own internal PlantUML service.

[1] - https://plantuml.com/


There are several VS Code plugins for PlantUML and Graphviz that allow for live rendering. That has proved very useful for communicating ideas without needing a drag-and-drop interface, and the result is a file that I can drop into a git repository.


There is a JetBrains extension too!


Choice of editor is subject to taste, but I find VI/Vim keybindings and modal editing an absolute must-have for productivity.


+1 and a recommendation for vimium browser extension.

basically, whatever allows increased rate of I/O between you and a computer.

it's really as simple as never leaving the home row, and I'm pretty close to thinking that the mouse as an input device has held people back.


+1

When you’re on it, also install Vimac [1]. I’ve mapped it to ⌃Space (similar to Spotlight’s ⌘Space). It’ll make you grab the mouse / trackpad less often.

[1] https://github.com/dexterleng/vimac/


Yes, I learned vim recently because I was far away with a shitty netbook and a project fell into my hands. With such a weak computer, a modern dev environment wouldn't work, so I learned a lot of vim while coding it.

Nowadays I'm using VSCode because it has many many features not available on vim (or available with plugins with a large learning curve that I can't have right now) but I always use vim mode. I can't imagine not being able to navigate like vim and have now added vim-like bindings/commands to some other programs like Firefox.


I agree. When first learning Vim the learning curve is very steep for what feels like basically advanced cursor moving, but with more experience it becomes a very powerful text editor. Last week I had a 10,000 line file that was the output of a bash command with two columns, and each column was in quotes. I needed to remove the first column and the quotes around the second column. There are many ways to do it, but all I had to do was open up vim and type `qaA<backspace><esc>T"d0jq` and `10000@a` and it cleaned it right up.


As an opinion, https://vimvalley.com/ was well worth the investment.

I use https://www.spacemacs.org/, but modal editing is a win.


For those non-viers out there, the analogous option in non-modal editors like, say, sublime, would be `select all => split into lines` or `column selection`.


- rr, the time-travelling debugger[0]

- intellij, or any good ide. seriously, we underappreciate everything these tools do and can do for us.

https://en.wikipedia.org/wiki/Rr_(debugging)


Being better at coding and being a better programmer do not always converge.

To become a better programmer, I find _testing_ is a big booster. Not only does it help demonstrate compliance with specs, it also adds more confidence that you're in control of the code. It lets you experiment and change approaches.

Also, I get good help from an _IDE_, even one as simple as Geany [1], which lets me jump between functions and also hints at the args. It saves time and, again, adds confidence when exploring/extending new code.

And finally, the most important booster - _learning my limits_. That is, knowing when it's time to ask for help, and not feeling too defensive about peer review. This goes hand in hand with knowing what a good team is. Not just wishing for it.


My biggest improvement came from eliminating a tool. It was when I stopped using debuggers (except maybe once a year) and invested more time in creating efficient runtime error checks and internal self-tests. Debuggers make me lazy. I find I can solve the vast majority of my bugs more quickly with a little thought rather than running to a breakpoint and examining things.

Most useful tools:

- tmux

- vim

- catch (https://github.com/catchorg/Catch2)


Lately, using Jupyter notebooks. It's a paradigm change, and pretty close to what I would imagine Donald Knuth had in mind with literate programming. Just for example, being able to see the shape of variables and having that output stay there between coding sessions is a huge time saver. Add being able to plot data and using other visual tools right inside your code and well, I could go on and on. Just try it. It's like a REPL but with memory.


How do you integrate it into your workflow? Is your program scattered across blocks in a notebook or is it easy to pull everything together once it works the way you want?

Also, thoughts on Jupyter vs. Colaboratory?


Late response, but in case you are still looking I don't think I have enough experience just yet to talk about my workflow. For the moment, I'm mostly fine with just running my entire notebook. And yes, my program is "scattered" across cells in a notebook. And that's useful. The blocks are the small units I would want to debug, or to document with markdown above the block.

However, you might look to Jeremy Howard of fastai [1] fame for some direction. He apparently built the entire fastai library using notebooks and nbdev [2] which allows you to "add #export flags to the cells that define the functions you want to include in your python modules."

I have tried out nbdev in my free time, and it's pretty cool.

[1] https://www.fast.ai/ [2] https://github.com/fastai/nbdev


Not who you asked, but I use it to test things out. You can export the notebook to code.

You can comment out code or refactor it in your IDE, or just copy & paste it into a function/method once it's working.

I also know some people that just need to run things every now and then and they just launch the notebook and run it.


Resharper (and now all the various Jetbrains tools) - from the early days, this tool allowed you to aggressively and automatically refactor code. Before that, in the C# world (and many other languages), there was very little in the way of automatic refactoring tools. Not to mention its automatic suggestions on code transformations or more efficient use of the language are super useful.

Vim Bindings - While I often don't use Vim, I use Vim bindings in most any environment I use. I don't know if it really made me a better programmer, but it improves the experience of programming.

Unit Testing tools - making code testable makes code modular.

Visual Studio's debugger: a fantastic debugging tool from years and years ago. While I don't use it as much anymore, it's been a useful tool in understanding code.

Google Search, transformed the way to get information, coding information was hard to come by a few decades ago, now it is prolific.


I prefer to internalise useful ways of thinking rather than leaning too much on tools, on the grounds that the latter are easy-come-easy-go while the former can last a lifetime.

Learn to explain what you're building, why, and more-or-less how it works in a context appropriate to the listener.

(Related: write things down. Again and again, differently, until you understand them.)

Problems and their solutions usually have similar structures.

Any fix right now might save a lot of money. The right fix next week could save years. Often both are appropriate, but the latter is nearly always most valuable.

The world does not change nearly as fast as your competitors want everyone to believe it does, but people's beliefs can.

A tool which warns you of possible mistakes before consequences occur is always more valuable than a tool which tries to guess what you meant and does that instead.


If you think of languages like tools I find Haskell particularly productive. The type checker gives me a whole program view of every branch and I can interactively query my program thanks to strong inference and good tooling.

Mastering a good emacs setup has boosted my productivity with git and having OS-agnostic tooling for linting, debugging, shell, document publishing, email, etc.


- git. Not just commit/push/pull on a single branch - but getting really good at using git is a game-changer.

- vim. I started off in linux sysadmin land before pivoting into more regular software engineering work, so I was very good at basic vi(m) stuff because it's ubiquitous on linux/unix servers. However I've learned it's just as valuable as a full-blown IDE/dev environment if you're willing to invest in learning and configuring it. I think newer editors like Atom/VSCode/IntelliJ are still very valuable and great tools, but Vim really is incredible.

- An understanding of unix history. Maybe it's just the type of work I do, but knowing why things are the way they are is incredibly valuable.


Regarding understanding unix history.

I actually think that knowing the history of whatever you're working with, is good and really valuable.

If you don't know the history, you don't know how things came to be and why things are the way they are. This applies to language, code, product, organization, tools, etc.

You can treat things like a blackbox and use them. They'll work for a while but every time you see something that doesn't work, you'll spend time and more time trying to figure things out. Do/Learn things once.

Ask questions. Sit with people that know. Ask smart questions - http://catb.org/~esr/faqs/smart-questions.html

I've been using git for years. But I decided to really dig into it to find out how things work. These are good links and there are actually many more good ones about git internals out there.

- https://git-scm.com/book/en/v2

- https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protoco...

- https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Po...


- Version control

- Unit testing

- Leak detectors (for languages like C/C++)

- Race detectors

- Auto-formatting (eslint, gofmt)

- A solid editor (prev: vim, now: vscode)

- One-step deployment scripts/automation (even for small projects)

I have not found a lot of value in debuggers, outside of after-the-fact debugging (e.g., core files.) I much prefer printf-style debugging.


> I have not found a lot of value in debuggers, outside of after-the-fact debugging (e.g., core files.) I much prefer printf-style debugging.

Have you heard of/tried rr? I think you might find that it fits your thought process better than traditional debuggers.

https://rr-project.org/


Never heard of it, but I'll take a look. Thanks for the pointer. :-)


Would you go into details about the one step deployment?


I usually have a shell script called 'deploy.sh' that does a bunch of git, test, deploy stuff. So part of my development workflow is calling "deploy dev", "deploy staging", and/or "deploy prod" to push any changes to the dev/staging/prod environments.

The script makes sure changes are tagged, committed, and pushed, tests pass, builds and pushes container images, static files, and then rolls out binaries. The rollouts depend on the infrastructure -- it could be something like scp + kill/restart, or a kubectl command, or "gcloud run deploy", etc.

One-step deployments vastly improve the quality of the service because I can test things in real environments quickly, push out patches and bug fixes quickly, keep updates small and bounded, and bonus, I can come back to a codebase months later and not have to re-learn how to roll something out.
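A stripped-down sketch of what such a script can look like; the registry, image name, make target, and kubectl rollout below are placeholders and will differ per project:

    #!/usr/bin/env bash
    # usage: ./deploy.sh dev|staging|prod
    set -euo pipefail
    env="${1:?usage: deploy.sh <dev|staging|prod>}"

    # refuse to deploy if tracked files have uncommitted modifications
    git diff --quiet || { echo "commit your changes first"; exit 1; }

    # run the tests, tag the build, push the image
    make test
    tag="$(git rev-parse --short HEAD)"
    docker build -t "registry.example.com/myapp:${tag}" .
    docker push "registry.example.com/myapp:${tag}"

    # roll out to the chosen environment (could just as well be scp + restart, gcloud run deploy, ...)
    kubectl --context "$env" set image deployment/myapp myapp="registry.example.com/myapp:${tag}"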


For a while I used pm2 to deploy stuff. It's honestly a nifty little tool but I'm moving away into a docker based solution.

Initially I was thinking of having a CI build the images and push them to the server and so on, but for my personal projects it's too much. So I might just do a bash script to build the image, push it to the hub and then connect over ssh to pull the image and restart.

Regarding your script, could you share a template?


Learning functional programming will make you a better developer in all languages IMO. This is the single biggest vector for understanding how to write maintainable code.

Similarly learning design patterns will probably make you a worse programmer but learning the heuristics behind them (eg composition over inheritance) will make you a better programmer.

Learning DDD will make you a better programmer by understanding how to use bounded contexts to manage complexity as software grows.


- tests in watch mode. Either with --watch if I'm doing JS with mocha, or inotify if I'm doing another language. Tests or compilation should execute on save, which I pair with autosave in the editor.

- making sure that I can execute everything from the console. It makes tickets reproducible, for my future me and my coworkers.

- go. Error handling in go has very bad press, mainly because it's boring and repetitive. But one builds the habit of handling all cases.


`fswatch` is a cross-platform way of monitoring file system changes, versus `inotify`. I use it for testing in the same manner as you describe, very helpful to have one tmux pane with code and another constantly showing the results of test execution.
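A minimal version of that pane is just a loop (assuming a `make test` target here; any test command works):

    # re-run the tests whenever anything under src/ or test/ changes
    fswatch -o src test | while read -r _; do make test; done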


I'll add two really simple, but also very effective tools. Search: in the form of things like "the silver searcher" (https://github.com/ggreer/the_silver_searcher) or "fd" (https://github.com/sharkdp/fd). And, Replace: with tools like sed (https://www.gnu.org/software/sed/).

These are really simple tools but they are also very powerful. I have surprisingly often needed to change from version X.Y.Z to version A.B.C in some set of files (be it package.json or requirements.txt or *.csproj) and tools like these make it happen in an instant.

Most recently they have helped me with some spaghetti code where the software is used in several places and I need to ensure exactly which places use the KerfuffleManager. Can I trust my IDE to find all places where it is used? Maybe, but with proper search tools it stops being a trust thing.
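For the version-bump case, the whole job is typically a one-liner (the patterns below are illustrative, and note that BSD/macOS sed needs an argument after -i):

    # find every file still pinning 1.2.3 and bump it in place
    ag -l '1\.2\.3' | xargs sed -i 's/1\.2\.3/2.0.0/g'

    # or scope it to specific manifests with fd
    fd 'package\.json|requirements\.txt' -x sed -i 's/1\.2\.3/2.0.0/g' {}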


In the same vein as silver searcher, I'd mention ripgrep: https://github.com/BurntSushi/ripgrep


Vim. I felt like I was slowly going crazy due to the friction between my thoughts and the commands needed to navigate through the code. I was just looking for a tool to solve that and happened upon an article about Vim. A teacher had tried to show me Vim in high school, but I slowly backed away from his crazed stare. But now, I came to it of my own volition. It's great. It feels like the one tool which allows me to freely express what I want without getting in the way. Add to this a suite of simple yet powerful tools, such as recursive macros, which mean I can usually automate tedious and repetitive tasks.


Emacs. Specifically Doom Emacs.

https://github.com/hlissner/doom-emacs


Emacs. Specifically vanilla GNU Emacs.

https://www.gnu.org/software/emacs/


Shell, awk, sed and textutils. They teach that programming is everywhere.

I have trouble understanding professional challenges of people that never had a chance to learn those. Like accountants struggling to merge tables or proposal writers painfully tuning document layouts.


Any recommendations for resources on learning more about these commands?


Shell - for bash, I'd recommend the Bash Guide from Greg's Wiki (https://mywiki.wooledge.org/BashGuide)

For awk:

1. Chapters 1 and 2 from The AWK Programming Language (https://ia802309.us.archive.org/25/items/pdfy-MgN0H1joIoDVoI...)

2. Awk one-liners (https://www.pement.org/awk/awk1line.txt)

3. Explanations for the one-liners (https://catonmat.net/awk-one-liners-explained-part-one)


https://www.grymoire.com/Unix/index.html Have used this for Awk and Sed.


I install the largest whiteboard I can find. My current office has one that is over 2 meters across. Having a huge space to discuss architecture is valuable.


I second this. Also, whiteboard notecards and neodymium magnets.


- Solid editor: I've used a few editors and IDEs in the past. I think learning to use a programmable editor is a better investment than learning to work with an IDE. IDEs are generally single purpose: working very well for a limited selection of languages, platforms, etc. Having a good editor at your disposal allows you to manipulate any text oriented (and sometimes binary) format. When your editor is programmable and has an active community, you can probably find a plugin to solve your problem more effectively or write something yourself, with your editor! With all your customizations in code, you can manage your editor in the same way as you manage your products: source control! This gives you the possibility to improve your daily working environment reliably and iteratively. IMO, Vim and Emacs are both very good options. Both were created decades ago and still have very active communities. I personally prefer Emacs and use vim on occasion for remote work.

- Learn basic vim and emacs keybinds: most interactive shells use the readline (or a similar) library. These shells often have emacs keybindings by default. Vim is often the alternative option. This was an eye opener while learning emacs: suddenly I started to understand how to efficiently navigate any shell, even on remote systems, without configuration!

- Tcl: Easy to learn, hard to master, and powerful! Syntax and semantics can be explained in 12 bullet points[1]. Bundled with the native tk library, it's my go-to language for quick-n-dirty GUI tools and prototypes. The "everything is a string" principle makes it extremely flexible. Apart from quick-n-dirty, it's also used for stable tools such as expect, gitk and environment modules. Even sqlite and redis started as tcl scripts!

- POSIX shell scripting and core utils: write once run anywhere.

[1] https://tcl.tk/man/tcl/TclCmd/Tcl.htm


Tcl/Tk is still the fastest way to get from zero to functional GUI that I've ever seen. I wish I could say there was something of similar ease in Lisp, but alas there isn't -- not in the current ecosystem of Unix workalikes running on generic CISC/RISC hardware. Maybe the Lisp Machines had something.


> Tcl/Tk is still the fastest way to get from zero to functional GUI that I've ever seen.

PySimpleGUI seems pretty great for that.


PySimpleGUI wraps tkinter and thus ultimately Tcl/Tk.


leetcode.

I hate that I have to do these stupid exercises to be able to get a job, but I have learned some useful skills by looking at how I write code and comparing it to faster solutions. It has changed how I write normal code, and helps me write automatically without thinking too much.


For me it’s profilers. A lot of people keep optimizing their code without actually having hard data. Being good with a profiler has often allowed me to go in and get significant improvements quickly, because I could attack the code section that really took a long time vs the code sections where you think the time is going.

Also having detailed logging with time stamps helps a lot in identifying where the code spends most of its time.


Regular scheduled 1:1 meetings with those I work closely with. Does wonders for teamwork / relationship building.


That's easy, every time I did something new I learned a lot.

First time

+ Linux.

+ Shell scripting.

+ Grep, regex

+ Embedded development, configuring ADC hardware, e.g.

+ FPGA, designing FSMs, adders, etc.

+ Reverse engineering anything. Use a logic analyser. I just learned the details on HDMI a week ago, for example.

+ Vim, and sticking to it in the beginning. :-)

+ Basic sysop stuff, ssh, rsync, dig, etc.

+ Cuda.

+ Tensorflow.

+ Javascript. Definitely if you come from a C angle. Playing with modern frameworks doesn't hurt!

Of course those are not just tools, but they all help understanding what's the best tool for the job. :-)


- Git - quite self-explanatory

- Visual Studio Code - I like it more than the paid version (which I have a subscription for)

- SQL Server Management Studio execution plan modes - a gem that few people know about, and even fewer can use effectively to make 10 second queries run in 30 milliseconds.

- Composer (PHP) as my first encounter with package management, years ago.

- Notepad++ - the only mini-tool available in production

- Virtual machines and snapshots. Never again fear ruining the work environment, and they help a lot with having separate dev/stage/prod environments without breaking the bank. Also a way to split and separate my environments: the laptop is for email, VMs for everything else.

PS. I am not a programmer, I am officially a manager and mostly an architect, but I write code to keep current


There is a paid version of VS Code?


Maybe the person meant to say Visual Studio Code is preferable to Visual Studio.


I feel that Visual Studio Code (a free programmer's text editor) suffers from its poor branding -- it's too easily confused with Visual Studio (a paid IDE), so folks dismiss it out of hand because of the perception that it's not free or is only for MS languages like C# (neither is true). Microsoft should have called it something else.

As a Vim user of 2 decades, I have to confess something: VS Code is a far better programmer's editor than Vim. It's pretty responsive, has seamless (and better) integration with language servers [1] than Vim, has better autocomplete than Vim (due to said language server) and the Vim keybindings are not terrible.

For the first time, refactoring (renaming variables) is easy because the editor now understands the language-specific structure of the code as opposed to the code being treated as a long string. It's possible to attach a language server to Vim but it's fiddly and doesn't work that well.

[1] https://langserver.org/


I have no problems using LSPs for C/C++ (via clangd), Rust (rust-analyzer), Python (pyls), JS/TypeScript, CSS, bash, and more in Vim (well, NeoVim) with the autozimu language client.


The experience is a bit different in VS Code.

To me, in Vim it feels a little bit like one is calling an external program (which one is), and the monospaced text-only UI constrains what can be shown.

In VS Code, the experience is more integrated (because the GUI isn't constrained to a text interface), things update live, so you can preview the changes before applying.

I've tried both, both work but I find myself preferring the VS Code experience.


Visual Studio is paid. I don't qualify for the Community Edition, all other editions are paid.


Visual Studio Code and Visual Studio are two completely different things.


Immutable infrastructure / functional programming / crash-only code. In conjunction with lots of docker, ansible, studying FP, and trying to adopt these principles in all my main languages (python, go, c, c++, bash)

Those 3 ideas all kind of mushed together radically changed my overall approach to development. What do they all have in common? Avoiding complex state.

State is hard to reason about. Each piece of state that can interact introduces combinatorially-growing complexity.

Crash-only code may seem like the odd one out but think about it: if your program dying at literally any point has the potential for data corruption, you probably have some sort of issue with state management.

I guess you could call it anarchy coding, because it tries to completely eliminate state ;)
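
To make the crash-only point concrete, here's a tiny sketch in Python (the file name and state are invented): write new state to a temporary file, then atomically rename it over the old one, so the program can die at any instruction and you still have either the old file or the new one, never a half-written mess.

    import json
    import os
    import tempfile

    def save_state(path, state):
        # Write to a temp file in the same directory, flush it to disk,
        # then atomically replace the old file with it.
        directory = os.path.dirname(os.path.abspath(path))
        fd, tmp_path = tempfile.mkstemp(dir=directory)
        try:
            with os.fdopen(fd, "w") as f:
                json.dump(state, f)
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp_path, path)  # atomic rename
        except BaseException:
            os.remove(tmp_path)
            raise

    save_state("app-state.json", {"processed": 42})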


- vim, grep, regex, cli (i.e. fundamental text tools, no “ide” plugins)

- no “intellisense“, only word completion

- write down my operative “what do I do now” stack on paper (helps to not lose yourself in a forest of thoughts and fixes)

The second one I found very effective, because instead of guessing what a function or method does, or how many of them there are, you go to the man/html docs and read what it really is, how it works, and what its error modes are. This way I learn much more about the APIs that I use, and read rationales, caveats and more.

Also, intellisense done wrong (and it often is) gets in your way and creates non-deterministic, time-dependent modes for your input, even worse than the modes vim-anxious people usually criticize Vim for.


I do think that your keyboard can not only help mitigate the health risks associated with typing all day, but can make the actual act of writing code go from annoying to enjoyable. What works for you is a subjective question (I don't think mechanical keyboards are the best for everyone, if I'm honest), but a good keyboard can really go a long way.


Guess an ergonomic keyboard could help prevent some injuries.


Pico. Without it, I'd still be searching for how to quit vim :)

But jokes aside, all kinds of real-time log aggregators make a big difference for me. Once upon a time, we had acquired our first huge customer and I had to set up the analytics service as a single-tenant deployment. I provisioned better servers than usual, installed the service and started processing the traffic, only to find out that we could handle just 1/50 of the contracted traffic. For the next 48h or so I pushed hundreds of changes to production and watched the realtime charts in Splunk go from 1/50, to 2/50, to 50/50. I saved the contract, saved the company, and went to bed.


Continuous Integration tools.

I started with Jenkins and hated it, because it really encouraged configuration through an awful web interface. CI configs should live in version control!

(I know you can do that with Jenkins these days but I still haven't figured out how myself).

Then I got going with Travis CI and loved it - then Circle CI, then GitLab CI and today I'm mostly using GitHub Actions.

Every single personal project I build uses CI now. My tests, packaging and deploys are all automated from day one. It's an enormous productivity boost and lets me manage way more projects without worrying that I'll forget how to test them / package them / deploy them.


Property-based testing.

I believe it can be a real game changer. It can enable a normal programmer to create the same quality as an excellent programmer.

An introduction: https://fsharpforfunandprofit.com/posts/property-based-testi...
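
For a feel of what it looks like in practice, here's a minimal sketch using Python's hypothesis library; the run-length encoder is just an invented example, and the property is the classic encode/decode round trip:

    from hypothesis import given, strategies as st

    def rle_encode(s):
        # Toy run-length encoder, only here to have something to test.
        runs = []
        for ch in s:
            if runs and runs[-1][0] == ch:
                runs[-1] = (ch, runs[-1][1] + 1)
            else:
                runs.append((ch, 1))
        return runs

    def rle_decode(runs):
        return "".join(ch * count for ch, count in runs)

    @given(st.text())
    def test_roundtrip(s):
        # Property: decoding an encoding returns the original string,
        # for whatever strings hypothesis generates.
        assert rle_decode(rle_encode(s)) == s

Instead of a handful of hand-picked cases, the framework generates hundreds of inputs and shrinks any failure down to a minimal counterexample.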


- Erlang/Elixir: taught me functional programming, pattern matching, recursion, supervision, how to achieve fault tolerance

- Docker: taught me about server management and immutable infrastructure; paved the way for a lot of concepts (containers are surprisingly foreign for a lot of people)

Other than that: learning about different databases in general and learning how they solved the problems they had and why they solved them that way; the same problems come up over and over, and it's important to be able to tell good ideas from bad ones, and solved problems from novel ones


Touch typing - I'm learning it during this WFH phase, as much more communication is done by typing.


Putting this in my hosts file helped me maintain my focus.

127.0.0.1 news.ycombinator.com

127.0.0.1 reddit.com

127.0.0.1 facebook.com


Yeah, this is nice, but it simply does not work for all websites. The edge between working and browsing the web can be very thin on some sites. Facebook is easy, just block it completely, but reddit and YouTube are difficult. So what I do is use a plugin called stayfocused, which lets you set an amount of time (for me 30 min) that you are allowed per day on the websites you list. On my second browser I place no restrictions like that, but use uBlock Origin to block everything until it is broken. So for example I remove the recommendations from YouTube and its search bar. On reddit I remove the list view so only a single post shows correctly. For me this works way better, because the rules are fair, and that makes sure you don't just remove the entries from your hosts file a few days after you added them.


Why is reddit hard to block? I put that in my hosts from time to time and it works fine.

The real problem is how incredibly hard it is to edit the hosts file on mobile. At least without using a VPN app or something.


When you are programming with some lesser-known graphics framework like libGDX, or when you are using OpenGL, the relevant subreddits are more active than Stack Overflow. For example, I implemented a framebuffer in OpenGL wrong a few weeks ago, and the output was just a black screen. Some redditor mentioned that specific thing in a comment on /r/OpenGL


Even better than that:

1. apps like freedom, selfcontrol.app, or coldturkey

2. regular sleep and exercise. If you find it hard to go to sleep on time, lie down in a dark room for an hour and listen to an audiobook or call your family.

3. When you start a thing, write down the goal and the "why" behind the goal.

4. Find ways to identify the incremental signs-of-success in any task. Apply the TDD mindset to other things.

5. Pomodoro technique to combine 3 and 4.


I waste time when I go to YouTube because of all the recommendations I get. I added custom CSS to Safari with display: none on all those divs, so now I never see recommendations and just watch the thing I'm looking for.


Lol, same for Facebook here. I don't have an account (I had one years ago), but the Internet is full of FB links. Considering doing the same for Twitter, even though I've never used it (it is not so popular in Europe).


for good measure, here are 83k additional domains:

https://github.com/StevenBlack/hosts


I've been using the Leechblock extension on Firefox. It lets you set browsing quotas per website. It has been good for reducing time wasting habits.


But. . .then. . .how. . .did you get to this site?


:)

I have a dedicated "time-wasting" computer - this helps me recognize when I'm spending too much time doing things that may not be important.


Can't imagine writing code without Jetbrains IDEs. VSCode is great for lightweight editing. On the other hand, I have other tools like CopyEmPaste: https://apps.apple.com/us/app/copyem-paste-clipboard-mgr/id8... This is a great clipboard manager and helps a lot with Copy-Pasting.


Perl. CLI one-liners or small one-off scripts.

You can do so much automation and data munging so quickly with it that it really makes you feel like you have superpowers.


Yeah, I still use perl for most all the server side stuff I have to do.


Read the docs!! Even the bad ones are better than none.

I can't count the number of times I've been working with a more junior engineer and I'm just rattling off answers to their questions... then at the end I realize that I have 5-6+ tabs worth of documentation open that I've simply been referencing throughout our conversation, but they think I'm some walking encyclopedia of knowledge.


To me, it's functional programming in general and Haskell specifically. Thinking in functions actually helps me become a better programmer!


Lots of great tips/tools here. Let me add this to the list:

Dash https://kapeli.com/dash

Makes browsing documentation really fast and convenient. I have it bound to F1 in my Emacs to look up the word at point, so you instantly zap to man pages, the Lua reference manual, Python docs, Perl docs... you name it. Highly recommended.


1) Emacs. Still the king of text editors, with even the latest challengers (e.g., Visual Studio Code) proving unworthy. Nothing beats an editor you modify for the task at hand live as you edit.

2) Scheme. It started with writing plugins for GIMP in its embedded Scheme in the 90s, then I moved on to Guile -- but Scheme not only made programming fun, it put solving a greater class of problems within my reach, especially when I had to solve them quickly. When I need to explore an algorithm or rough out a prototype for how a system might work, I reach for Scheme -- and even when I'm not working in Scheme or another Lisp, I bring its lessons with me.

3) Darcs. This was my first DVCS, before git, and it enabled me to more easily use version control on my own independent projects and prepared me for a git workflow.

4) Linux, and open source in general -- for providing me with a free OS, free tools, and lots of code to examine for ideas and inspiration. Back when I was faffing about with Windows programming as a teenager, this was a real game changer.


Going to the source!

Using a framework or library? You'll get so much better so much faster if you look at what the authors have done.

It's often (I find) easier than looking at random open source projects since you're working closely with the tool anyway. It will help you debug or prevent weird things, teach you techniques you didn't know, and give you a sense for what to strive for.


Using an actual, proper IDE instead of text editors with plugins that have so-so language support.

Learning how to operate a computer efficiently. Use keyboard shortcuts. Set up scripts for commonly used tasks. Configure your environment and operating system so it's less annoying.

Being efficient at using a computer also makes you annoyed at poor UIs, and as a result, better at building great apps.

The ability to write things down, and finding joy in writing documentation. I'm continuously surprised at how many people will have a 3 hour meeting and just not write anything down, or never write any comments or READMEs in their projects.

Understanding how to pick the right tool for the job. You don't need to use the latest and greatest container orchestration, library, or whatever.

Not caring about minutiae that don't affect the business, like spending time on code-style nitpicks in pull request comments (when automated tools can do the job), has made me more efficient and less tolerant of the tendency programmers have to focus on non-important things.


ESLint (and other linters, preferably used with the strictest rules) - while at the very beginning it was annoying, it helped me write much cleaner code.

Longer version here: https://p.migdal.pl/2020/03/02/types-tests-typescript.html


Prettier, it helps me have clean code without worrying about writing clean code :)

It does take a bit to get used to, but then it's a life-changer.


Maybe the opposite of many other comments but... macOS. Limited choices and customisations, but less is better. Not having to worry about "stuff" is something that goes easily undervalued.


That's exactly why I'm using Ubuntu instead of a more customized Linux distro atm. I will say that after 2 years and a couple of updates, the idea of a rolling-release distro is sounding better.


It can sound stupid but I'm a newbie, so for me:

* Firebase - moving my ideas from localhost to online and being able to present them / get feedback. That was/is motivating me to solve more complex problems.

* JS challenge websites like Coderbyte and Edabit

* honorable mention - Hardcore Functional Programming in JavaScript, v2 from Frontend Masters


I started my career programming in Vim on VT100 terminals (https://en.wikipedia.org/wiki/VT100) - yes, I am old.

There was no IDE, auto-complete, visual debugger etc. You really had to think before you typed any code. Even fixing a compilation error was a pain - you had to note down the line number and the error, go back and open vim, fix the error, exit vim, re-compile, and rinse and repeat. It taught me the paradigm of thinking hard about the problem before starting to code, and I think that has helped a lot.

This was still better than the punch-card days, I have heard, but I personally never had that experience.


Not strictly a better programmer but a more efficient one.

- FZF (Fuzzy Finder) really speeds things up in bash when searching your history.

- Vim (as others have mentioned). With the plugin vim-fugitive (a git related vim plugin). I love Gblame in vim-fugitive, as being able to look at the history of lines of code and stepping back in time can at times be very useful.

- Creating various aliases. For example in git I have little aliases which just make me faster. "st" = "status", "rbi" = "rebase --interactive", "ci' = "commit", etc...

- Using The Silver Searcher (ag) or Ripgrep (rg) instead of grep. Much faster than using grep.

- Using 'fd' instead of find. Like above it is a lot faster.


I've installed rg and aliased it to grep, so the muscle memory stays the same but underneath the tool is changed.

Also works for

ls -> exa

cat -> bat


exa completely breaks ls arguments/switches and as such is a hard pass for me.


I have since switched to lsd: https://github.com/Peltoche/lsd


Using a step debugger and stepping through all the code I just wrote, or using it to explore code other people wrote.

Surprisingly this seems to be a very uncommon use for a debugger; some programmers are misled by the name and only use it for actual debugging ;)
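
In Python, for example, that kind of exploration can be as simple as dropping a breakpoint() into the unfamiliar code and stepping through it; the function below is made up purely to have something to step into:

    def mystery(items):
        breakpoint()  # drops into pdb: n = next line, s = step in, p expr = print
        total = 0
        for item in items:
            total += len(str(item))
        return total

    mystery([10, "abc", None])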


Typescript - defining interfaces before doing any work is like visual planning. Adding it to old projects highlights a lot of unknown issues.

VSCode - right click a keyword > go to definition made me more comfortable navigating code written by other people.


- Debugger - doesn't matter which one - just stop using echo / print for complex debugging.

- Good IDE - I prefer Jetbrains products, once you learn all the shortcuts for the various functions it just makes things much faster than constantly switching between windows.

- Sublime text - Not as an IDE but columnar editing and fast construction / munging of data files is crucial for efficiency

- Linux / Bash / Shell - learn the tools - sure, you can write a script to do most things, but learning how to pipe some commands together to get a certain output is universally useful. Learn the basic Vim key bindings; you should be able to comfortably copy, paste, save, select, etc.


Docker. Being able to develop/deploy code across a team and business is invaluable.


100%.. for all the crap Docker gets these days, it has been a great tool for my software dev processes.


I don't know how the infrastructure was before Docker was introduced, but for me it made things more miserable. Build times are longer, and if you work on OS X it eats up battery and causes the machine to heat up (because on OS X it runs inside a VM). It makes things so much harder: each time you make a change you have to rebuild to run it, unless you use some tricks that aren't always possible in all languages. You can do development inside a container, but that also has its own issues, primarily that you can't use your fancy IDE.


I don't mean this as a counterargument, but I've become much more efficient now that I've been able to leave docker behind (changed jobs). Docker solves some real problems, but it had real costs for me.


MacBook Pro.

After years of dealing with a Windows workstation, I went into credit card debt to purchase a quality laptop computer for my development needs.

I have had the same one for 6 years and found little reason to buy a new MBP.


I agree with you entirely. In terms of productivity, having been given a MacBook Pro at the places I've worked, versus the hoops I needed to jump through to try and develop on Windows in my free time: the experience is miles beyond. After my PC died, I explicitly bought a MacBook Pro and love it as a daily driver and for developing on.

Homebrew (package manager). A terminal that speaks an actual standard (no weird CMD.exe, no PowerShell). gcc built in.

Note: All these benefits are just benefits garnered by using a system with a unix base. I should've just committed and installed a dual-bootable Linux partition on my Windows machine a long time ago.

Unfortunately I have a hard time explaining exactly why developing on Windows is such a pain... The best success I've had was going through Git Bash, which always felt like a hack, and that's just for nodejs; if I want to compile a C/C++ program the whole thing is a nightmare with Visual Studio. I think OSs should be judged on 'time to compile a C/C++ program' from a fresh install.

Hell, I think 'the tool that makes me a much better programmer' is being terminal-native. Get used to unix systems, they're wonderful.

/rant


Lenovo ThinkPads are similar in build quality and processing power while being significantly cheaper and they have a usable keyboard. Same for Dell XPS, I believe, but my company is using ThinkPads.

Windows is usable even for node and git development nowadays.


Can I ask if you've tried using one with Linux (Ubuntu or Mint)? I'm thinking of getting a ThinkPad next but I would prefer to use Linux over Windows and I'm wondering how well it runs Linux.


I used to work on MBPs and then switched last year to a Thinkpad X1E running Mint and I couldn’t be happier. Do make sure though to search for potential problems for the specific Thinkpad model that you pick. (And spare a few dollars to support Mint development if you choose it and can afford it!)


Thinkpads are very popular among the Linux crowd.

The only thing that doesn't work in mine is the fingerprint reader.


Yes same here. It works 1 time out of 10. Really annoying


In general, Thinkpads work great with Linux. You'll want to check your specific model though, especially if you're buying a version that's just been released.


Running Ubuntu 20.04 on my P53. The Internet told me I needed to check the BIOS was up to date and check a couple of things were set properly before installing, which I did, and everything seems fine. Before that it was a T430 which was also fine.


I'm using Ubuntu on a ThinkPad x230. Works flawlessly for years.


Learning how to use the compiler and type system (if your language has them) to get as close as possible to "if it compiles, it works", is invaluable.

Novice programmers just want the code to compile and not error out; they just want the compiler errors to go away. More experienced programmers learn to use the errors as a to-do list of sorts at first, then as a guide for refactorings and fixing bugs.

Learn to harness the compiler and type system to help you, and your coding will improve 10x in both velocity and quality.
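
A small sketch of that mindset in Python, where the "compiler" is a checker like mypy rather than a real compiler; the UserId/OrderId wrappers are invented for illustration:

    from typing import NewType

    UserId = NewType("UserId", int)
    OrderId = NewType("OrderId", int)

    def cancel_order(order: OrderId) -> None:
        print(f"cancelling order {order}")

    user = UserId(42)
    order = OrderId(7)

    cancel_order(order)  # fine
    cancel_order(user)   # mypy rejects this: a UserId is not an OrderId

The runtime wouldn't care, but the checker turns "I passed the wrong kind of int" from a production bug into a red squiggle.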


NCrunch: https://www.ncrunch.net

It has helped me so much in applying TDD rigorously in every app that I write by giving me very quick feedback on code I write.

The simple fact that you can refactor something and within seconds know if everything still works is a massive time saver, and that allows me to write better tests. It also makes flaky, slow and coupled tests visible: you get so used to the speed of test execution that the "bad" ones jump out at you.

Well worth the license


Foreman

https://github.com/ddollar/foreman

When I learned Ruby a while back. I know it doesn’t sound like much, but it was the first tool I’d used in development that showed me how I develop is critically important.

It's just a simple tool to start and stop several tools in the background, streaming the logs from all of them to one place. At the time, I'd never seen anything like that and felt like I'd been doing it wrong for years.


- vim

- grep (now using ripgrep)

- code navigation

- rust

- functional programming (scheme, Erlang, Haskell, Elixir, etc; structure and interpretation of computer programs).


Understanding that this whole industry is a giant house of cards, bs stacked on top of bs. Don't take anything personally, memorize the language needed to succeed


Years ago, when I became a developer/data scientist, I switched from macOS/Apple to a Linux-based system (Ubuntu).

I understand there are similarities between macOS and Linux, etc. Being on the same OS as most web servers saved me time in learning the basics of Linux, bash, etc.

Also, at the time I couldn't afford to keep buying new software licenses.

The only enterprise software I use is Navicat, for a user-friendly interface into databases.


In a similar vein, I am primarily a Mac user today after growing up on Windows and Linux (personally, and for development that runs on Linux as the end system). I am quite confident that already having an intimate familiarity with Unix is what really unlocks macOS as a powerful developer environment. I find myself much more appreciative of the positive differences that it provides over Linux and BSD, and much more understanding of the deficits and things it won't let me do than I think I would be without that familiarity.


I don't know Navicat but I use DBeaver and am super happy.


> What have you added to your workflow that's made you much more productive?

A second monitor and tiling window manager (or in my case Magnet, which adds simple tiles to MacOs).

When I added a second monitor to my setup my productivity exploded. It's just really nice to have one window full screen in my IDE and the other window with reference material, a shell, scrolling logs, etc.

Never having to switch away from the code is huge.


JetBrains tools help point out better ways to do the same task and teach you about new language features by identifying slow or outdated-but-compatible code, then instantly refactoring it to the better or newer version for you. It makes learning new language features easy because it gives you examples in context with code you're familiar with. (among many other ways the JetBrains tools are great)


I wrote a blog post about this:

http://blog.testdouble.com/posts/2020-04-07-favorite-things/

Outside of that, I'd say linters and code formatters have a huge impact on my productivity and code quality. ShellCheck in particular has taught me oodles about shell scripting.


TODO lists for which I use org mode, but you could use practically anything. I like a text editor for this rather than an app, per se, just because it keeps me in the flow. All you need is a place to jot down what you are planning to do next and to be able to arrange the order.

Usually I'll start with pretty high level ideas. If I have a story I'm working on, I'll put the description of the story in my TODO list. Then I'll think for about 5 minutes about what general things need to get done. I'll order these by some priority (doesn't really matter usually, to be honest). Then I'll start working on the first one.

Normally I need to poke into the code to really see what I have to do. I'll often add a sub-task to my first one that says, "Figure out what to do" or something like that. Then I'll do some exploratory coding for a few minutes. As I discover what needs to get done, I write it down in my TODO.

It's hard at first to stop yourself from just writing code, but pulling yourself back for the 20 seconds or so it takes to write down what you are just about to do can be surprisingly valuable. Don't censor yourself either. It's fine to guess what you need to do and then delete stuff that you realise is unnecessary later. As you are coding, any time you think, "Oh, I'm going to need X", add it to the TODO (again, difficult to train yourself to do it consistently!)

Once you get good at this, in my experience you will be quite interruptible. Any time I get distracted, or unfocussed, or lack motivation, I just look at the top thing on the TODO and say, "I'm just going to do that top thing". It always pulls me in.

I don't always code like this, but every time I do I'm dramatically more productive. I should always code like this, but... sometimes you want a relaxed day ;-)


That's a good question, I'm curious what comes to mind, tools that have helped me as a programmer..

- Git CLI as well as GUI - The latter for me is as essential as knowing (enough of) the commands and options

- grep and find

- TypeScript

- VS Code - Language server integration like type checking as I edit, and go to definition; syntax highlight with personal color scheme - I feel so cozy writing code, the colors enhance understanding, and the aesthetics of it definitely shapes the result to be small and beautiful; remote edit over SSH; keyboard shortcut to apply prettier formatting

- Linux real and virtual machines as "commodity" computing resource, available on (or from) anywhere

- GitHub - It could have been anything else, but having a social platform with an ocean of open-source code to study, discuss, and contribute to.

- Keeping notes in Markdown, as a kind of personal knowledge database

- Curated collection of libraries for common needs, written from scratch, forked, or as dependencies


Sonar: code formatting nagware for Java. I spent a few months being very strict with code style and it has made me permanently a more productive programmer. I used to look at my code and wonder how to improve it, now I know what to do until I'm happy with it. I've learnt to apply this to other languages. Sonar showed me how to make code consistent, that is easier to read, write, grep and work with.

An unexpected result of being anal about whitespace for a few months was productivity with large code bases.

I no longer have Sonar itself in my workflow. But I defo feel that tool made me a better programmer.

This is not really a recommendation for Sonar specifically, but a recommendation to go find a relevant code style tool and work with it until you are no longer learning code formatting and are instead learning code clarity.



* Zsh + Grep + Awk + Vim : Essentially 90% of my daily driving around my PC at work.

* ZimWiki : Have a notebook VCS'd in my home directory sync'd across my PC's. Amazingly easy way to take notes and keep track of work.

* Jetbrains CLion, Pycharm, IntelliJ : The best cross platform IDE. Bar none


JetBrains products help me understand a new codebase more efficiently, and basic use of ls and tail on unix based machines help me parse log files faster. SelfControl on mac helps me block distracting websites and Klokki helps me track my time. Bear is a great note taking app that I've mostly switched to from Evernote. I like GitHub's new Android app.

In terms of actually being a better programmer, I believe anything that helps you collect or parse information about your running code and debug it will improve your life. Sometimes that's print statements, sometimes that's a remote debugging session.

My most regularly used commands are probably `ls -lt | head -20` and `tail -f` or `tail -1000 | grep`, so maybe learning a bit about pipes too.


I quite like nm + ripgrep. There is a lot I don't understand about the C/Unix world (in fairness it's not all in textbooks) but this was the first step to understanding the "unix command line is my IDE" mindset which I found so odd.

Also fd-find, just plain useful.


Interactive interpreters and notebook interfaces.

ipython for Python, then later Jupyter notebooks.

Firebug for JavaScript, which then got built into every browser as the dev toolbar.

These days https://observablehq.com/ for JavaScript too.


StackOverflow (and Stack Exchange network in general).


- visual debuggers

- automated tools for simple refactorings (renaming and the like)

(you can read both of the above as "JetBrains" ;)

- source control

- REPLs

- personal logs / "lab notes" - keeping a record of how I did stuff for the next time I need to do something similar. "source control for non-code" kinda


assert() that is not compiled away in release builds. A miracle discipline drug that makes people think through what they are trying to code.


For me, not having Visual Studio, Jetbrains, etc has made life easier overall. I have to design code with low intellectual overhead, since I no longer have fancy tools detangling spaghetti for me.

So I use BBEdit and Oh My ZSH for just about everything.

For git interaction I love Fork.


QBasic, everything since then has been incremental.


Google + examples

I remember learning to program C in the 1990s as a teenager, and often taking a long time just to learn how to do something very simple. This was mostly because the compiler-provided documentation was sparse, and the books that I had only had a single point of view.

Now, I can Google "[language] [task] example" and read through 2-4 examples of how to do something. Then I can go back and look at the docs and they make significantly more sense. Often what would take 4-6 hours, or longer, to figure out from just the official language / API documentation can be figured out in 20-40 minutes.


ag, The Silver Searcher.

As a programmer, finding and looking up code takes most of your time. After I got into ag, it changed how I go through code: from occasional lookup to searching.


A couple of tools so far not mentioned, that have been hugely beneficial to me:

1. Copernic Desktop Search pre version 3. The version I use is over ten years old and still looks and works better than any other desktop search product I've ever used. Instant results when searching your file and network system, presented in a format-appropriate view. Please note that the version is important here: it went from being a stunning piece of freeware to a worthy corporate product years ago.

2. Trello. Every non-trivial task I undertake is managed through here, it's so simple and flexible.



I assume they are the same ones as available through the website?

https://www.copernic.com/en/products/desktop-search/previous...

Unfortunately, from version 3 the free version was severely restricted and I assume it has been chipped away at ever since.

eg. https://pinkeyegraphics.co.uk/copernic-desktop-search-3-a-na...

I'm still on 2.x, it's brilliant though.

There are still legit copies floating about somewhere I'm assuming; it was a free product, it's just no longer available from them.


Here is a similar conversation from about a month ago: https://news.ycombinator.com/item?id=23118940


Since you mention "more productive" I'll focus on that first.

Reading the docs and example code for open-source libraries and tools I use when I get stuck, as opposed to trying to figure out how to use them myself. And using Google to search for answers to questions, heading to Stack Overflow if it's been answered there, and reviewing all the answers offered.

It's a habit now, but it took me a while to break my previous habit of thinking I could figure stuff out without help, and I'm much more productive as a result.

As far as tools go, BBEdit is the one I use most.


I'm a Windows programmer, but I use:

UltraEdit - to say it's a text editor is to say a Ferrari is a car.

PowerBasic - a lot of people don't know PB and Python came out the same year. Fast, powerful, full access to the Windows API, pointers, classes, tiny EXEs, can use any standard DLL, and can make DLLs.

JellyFish - a PowerBasic IDE that is, in my opinion, better than the PB native IDE.

Paint Shop Pro - a far cheaper and easier alternative to Photoshop with all the stuff I need.

These are the tools I use the most to make my living.


If you're writing JS and don't have Prettier installed, you're missing out. It integrates seamlessly into IDEs and editors, so you can just write code without worrying about formatting.


Solarized Color Scheme - You spend more time looking at your code than at your spouse. A good color scheme reduces fatigue, especially when you are working through your code for long periods.


What color scheme helps with looking at my spouse? Ba dum tss!


Keeping a developer journal, digitally, so it can easily be searched.

(I'm using a simple extension called `vscode-journal` in my text editor, it writes daily markdown files stored on my Dropbox.)


+1! I do this as well, in particular a debugging journal. It allows me to find solutions to problems I've hit before. It's very handy to have a recipe for a fix without having to rediscover it again.


Not every job needs this, but when your company has a lot of code, it helps as a newcomer to spend time writing documentation. Those who "level up" in this skill can write documentation everyone understands, not just other programmers. For instance, RML instead of UML: https://seilevel.com/business-analyst-resources/rml-book/

Now, if you have BA resources on your team, or designers/UX folks, then use them. There are teams where other people do the business-understandable diagrams and you're doing technical ones. But... if your team is smaller, or the code is already written, then it's just as helpful for you, as the developer, to also be the one communicating how things work to the business and stakeholders. It will help everyone use the same terminology and simplify decision-making, and since you made the documentation while looking at the code or existing APIs, you can assure folks it is relatively complete and accurate.

https://www.youtube.com/playlist?list=PL2miG2CzrxakbZswQH-O4... has some videos on business-friendly documentation, and SeiLevel folks also contributed to two Microsoft Press books on software requirements and visual models.

I’ve found that mapping out the business domain, or explaining complexity, is at least half my job sometimes. Advocating for SRE best practices seems to be the other half. ;-)

For very large systems, DSMs are interesting. https://mitpress.mit.edu/books/design-structure-matrix-metho... But generally extremely technical and useful only to those creating the DSM.

Oh— also ASTs. Kind of a form of code documentation, but knowing how to use AST Explorer to read and modify a call graph or source code is incredibly liberating. It’s great to suggest we can make breaking changes and fix them in client code using an AST transformation, etc.
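
As a small Python-flavored sketch of the AST idea (the old_price/new_price names are made up), the standard ast module can do this kind of mechanical rewrite; ast.unparse needs Python 3.9+:

    import ast

    source = "total = old_price(items) + old_price(extras)"

    class RenameCalls(ast.NodeTransformer):
        # Rewrite every call to old_price(...) into new_price(...).
        def visit_Call(self, node):
            self.generic_visit(node)
            if isinstance(node.func, ast.Name) and node.func.id == "old_price":
                node.func.id = "new_price"
            return node

    tree = RenameCalls().visit(ast.parse(source))
    print(ast.unparse(tree))  # total = new_price(items) + new_price(extras)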

Finally, “A Philosophy of Software Design” was a nice quick read that I still get value from, but it doesn’t address how tests and ASTs could help; it covers the evergreen basics though.


Flycut. It's a clipboard manager. I hit cmd+shift+v and I can scroll through my previous clipboard states. It's an amazing upgrade from a one-state-only-clipboard life.

"Flycut is a clean and simple clipboard manager for developers. It based on an open source app called Jumpcut. Flycut is also open source too: http://github.com/TermiT/flycut"


If you're using Windows 10, you already have this: press Windows-v.

The clipboard can also be shared across your Windows devices.


CopyQ is MUCH better & cross platform.


In mainframe ISPF, F1 is usually help and F2 split screen. I have changed that: with F1, when I keep my cursor over a dataset, it goes into that dataset in view mode, and F2 I have mapped to the swap list, which shows all the ISPF screens I have open.

Swapbar at the bottom of the screen, and in SDSF always have SET DISPLAY ON.

Other than that, it's the physical laptop keyboard that matters. I am faster when PgUp/PgDn and Home/End are separate keys rather than a combination of Fn and other keys.


git blame inside of the IDE.

Line numbers, and jumping to lines. Shockingly, this isn't the default in some IDEs, and I frequently still have to figure out how to enable it.

Autocomplete, and a quick way to get a list of available methods for an object, based on the actual library or the things the object inherits. JavaScript IDEs seem to just show a random list of frequently available functions instead of the actual ones indexed - likely because they are all the same type.


JSLint hands down. There is a lot of sloppiness in JavaScript and that tool’s motto is: JSLint will hurt your feelings.

The idea was a heavily opinionated code validation tool striving to make code subjectively cleaner and clearer for strangers to read. Many people stopped using it because they claimed it was too opinionated, even though every other super popular/trendy JavaScript tool claims to be just as opinionated.


If you're a Java developer and learning pointers I recommend using the Java Visualizer.

https://cscircles.cemc.uwaterloo.ca/java_visualize/

Copy and paste the class you're working on. Include a main() to act as a test driver and then let the Java Visualizer step through your code showing you exactly what is going on.


Sublime has definitely increased my productivity and if I have to select one command, it would be the ability to select and update all instances of the word - https://stackoverflow.com/questions/12162047/how-to-select-a...


The single biggest contributor: vim. Not relying on fancy IDE's with smart autocomplete and all that. You end up learning the internals of a language/library without relying on the IDE to do it for you. It became my weapon of choice around 2009-2010 and now I can't remember the last time I had a syntax error. Browser for documentation and vim is literally all I need.


Just for reference, vim does have built-in completion. <C-p> and <C-n> in insert mode will complete a word. Also in insert mode, <C-x><C-{n,p,l,f}> are contextual completions (see :h ins-completion for the full list; I suggest not using the dictionary because it takes a while). I have found that contextual completion has changed how I program in vim, and it seems to be one of the lesser-known features. It is way more powerful if you're using buffers/panes/tabs, all three of which have greatly improved my programming as well (in different ways).


The Pomodoro technique has helped my productivity more than anything. So much so that I built Timmy (https://timmytimer.com ) and added social features for some added motivation (all timers are public - think of it like "Twitter for productivity"). Free and no login required if you want to try.


I've had a few newer people reach out recently about how to level up. I recommended:

- vim, but more the hotkeys than vim itself for me (ex: IdeaVim). Productivity gains + you always have a usable text editor in any env (ssh, local, etc)

- the Clean Code book, to help them know what good code looks like; it gives structure to shoot for, and helps you form an opinion on why one way looks better than another, or not


intellisense and friends.

As a junior developer, method documentation and autocomplete on demand are crutches I leaned on VERY heavily, with great success.


- Google

- Pen and Paper for scribbling thoughts

- Bicycle in green areas

- Javascript - closures, async/await, JSON

- SQL - Designing a normalised schema before coding clarifies relations

- Docker - just because it forces you to think about the dependencies of your code explicitly

- Evernote - improves my thinking and cements learning

- JetBrains IDE - I pay a small amount to not have to deal with a mishmash of plugins competing with each other when moving between languages.


Do everything in your power to find out what's actually happening with a bug; don't just guess.

- SQL

- Your metrics system's query language

- jq

- sort / uniq

- grep

- awk

- pyplot (a quick sketch follows this list)

- mapbox (if you have geospatial data)

- your environment's canonical profiler, and how to connect it in situ

- how to poke the relevant APIs with your bare hands (curl, grpcurl, etc)

- how to MITM and inspect the communications between your components (wireshark, charles, certificate trust stores, feature flags to disable pinning, etc)
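
For the pyplot item above, here's the kind of quick, throwaway look I mean, sketched with made-up latency numbers pulled from a log:

    import matplotlib.pyplot as plt

    # Hypothetical request latencies (ms); a histogram makes the outliers
    # obvious in a way that scanning raw log lines does not.
    latencies = [12, 14, 13, 15, 11, 240, 13, 12, 16, 230, 14, 13]

    plt.hist(latencies, bins=20)
    plt.xlabel("latency (ms)")
    plt.ylabel("count")
    plt.title("request latencies")
    plt.show()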


Your preferred language's debugger. It really does show in coding interviews who has control over their environment and can debug rapidly. It doesn't matter if it's the rudimentary pdb (basic Python debugger) or LLDB; understanding how to structure code such that it's easily testable -and- debuggable is a hallmark of a senior developer.


Experimenting.

Tools that make it easier to do experiments are good. I tend to think too much and read too much, but the only way to get better at programming is to put your ideas to the test and implement them (starting with simplified versions, i.e. experiments).

Modelling is something I want to get into; Alloy and TLA+ look like great ways to sketch out higher-level ideas.


When I finished my post-graduate degree in "Software Engineering", I felt so much more productive. I can release anything with very high quality if I have the necessary resources, and rework is a thing of the past. I can control the chaos, release often and confidently, and still have really good nights of sleep.


In no specific order or magnitude.

1. After or during a tutorial, making a silly, easy project to synthesize what I just learned how to do. Or following a tutorial and then basing my own idea off of that.

2. Understanding when and how to do incremental changes.

3. Understanding the reason for source control and learning Git, Subversion and CVS.

4. Learning Python.

5. Learning Lisp.

6. Emacs and org-mode.


Pen & paper.

It helps me get away from the urge to 'just give it a shot' and forces me to work in a much more structured way.

I still love to doodle with a pen and different color markers and it allows me to have it permanently in view on my desk with the option to add boxes and arrows as required.


ReSharper, I probably wouldn't have got around to understanding LINQ without its regular prompts.


I'd like to suggest CodeFights; it is a great platform for software engineers to practice for coding interviews and, in turn, get the tech jobs they want. One more tool for developers is ProofHub. It is one place for all your projects, teams and communications.


Focusing on my habits and practices over tools and technology has driven more productivity and sanity in my career.

Early on, finding new tools, languages, libraries and frameworks was exhilarating, and knowing them was something I cared deeply about and worked towards. Soon, I cared more about the tools and the language than the problem itself. Caring about the medium caused more pain and wasted time than joy and productivity: when any one of them broke, when they didn't do what I wanted them to do, when I found a new tool that was older and more mature than the one I used, when someone disagreed for no good reason, when someone used a worse tool than the ones I knew. I have wasted hours toying around with shiny new things only to realize later what a waste it all was. They all promised, and never delivered.

The point where things got better was when I started keeping simple logs.

At work, it was mostly putting into words the problem I was facing, why it was interesting, how others have solved or avoided it, what possible solutions I could find, ideal and realistic solutions, pros and cons of each, whom I should convince, who could help, and why they would agree or oppose. As things progressed, I kept adding updates and how things actually turned out. In personal life, it's been about people and projects: people I've met, people who have helped me, people whom I'm indebted to, people whom I hold dear and those I keep away from, memories and arguments, what I'm grateful for and what I want to change.

They are interesting to read after the fact. I've learned more from this practice than from any person, blog or book. I've been more productive when I see a pattern emerge and go back to how I handled things before. Highly recommend it. Moreover, it helped me find good and bad habits, how I respond to what happens around me, what gets me worried and upset, and what makes me happier and more productive. The 'right thing to do' is obvious looking back now, but never made sense looking forward. What seemed like promising projects did not turn out so; people who seemed like friends weren't.

So what tool did I use for this? Email. I kept drafts, one for each idea, problem, story, lesson learned, etc., across the different email accounts I had. When they were good enough, I'd send them over to a separate email account to keep it all in one place. Ironically, I started keeping logs after getting frustrated trying to find a good way to keep my notes.

Sun Tzu would say "Make the enemy fear not the weapon, but the hand that wields it."


regular expressions. Both in programs, and in text editors that support regex, where I eliminate the need to write some small utility program by using regex instead. Combine with sed & awk and it's an incredibly powerful combination.
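
A tiny in-program example of the "skip the one-off utility" idea, sketched in Python with an invented log format:

    import re

    line = "2020-06-09 12:01:03 ERROR timeout talking to db-01 (attempt 3)"

    # Pull out the timestamp, level and host without writing a bespoke parser.
    match = re.search(r"^(\S+ \S+) (\w+) .*?(db-\d+)", line)
    if match:
        timestamp, level, host = match.groups()
        print(timestamp, level, host)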


If you can call them a tool, tests would be close to the top of my list. One of the most important metrics for productivity is how quickly you can get feedback if something you did worked. A fast test suite is great for that.


* Arch Linux / Manjaro. AUR is a great resource for the latest packages

* i3 / sway. I've built a number of custom patches on top of it.

* Windows Aero Snap - a close second

* MacOs - Yabai + skhd a very distant third. Mac doesn't just play as well with tiling.

* LibVirt / Virt Manager / CockPit / VFIO. I have a number of VMs emulating mac, windows, and linux. My workstation is all three in one. I also have a local kube cluster spun up via the libvirt api.

* JetBrains Tool Suite - Their tooling is phenomenal

* vim / nvim - My secondary editor for quick changes.

* Visual Studio Code - Primarily for remote sharing work spaces.

* Git - I keep almost everything in a git store of some sort.

* zsh and oh my zsh, great plugins.

* Kitty - terminal with the kittens plugins is great

* Direnv + EnvFile, dynamic loading of container env vars on entering a directory

* GitLab - Despite its warts, the best cloud-agnostic DevOps platform I've used.

* Kotlin - I'm able to do pretty much everything in this, from vim over ssh or JetBrains. I use it for infrastructure, k8s, mobile, web, and back end.

* Gradle - I curse at it a lot, but less than other build systems. I have templates that allow me to quick start any of the above templates.

* Remote Desktop - I keep a cheap dedicated server I RDP for heavy work loads while away from my workstation

* Reg Ex Pal - to validate reg ex

* Functional Programming

* Markdown / Restructured text - Great way to write docs and draft architectural plans

* MermaidJS, Lucid Chart - Easily embedded in markdown to provide graphs of architectural patterns.

* OpenAPI/AsyncAPI/GraphQL - Schema-driven design lets me write the schema first, then generate type-safe routes for the server and clients for the consumers (web/mobile)

* Avro, Protobuf, Json Schema - For defining messages going across the wire.

* Containers - I started with solaris zones and it has been vital to my workflow.

* Docker Compose - This is the lowest barrier to providing a container stack for local dev. Next would be KIND.

* Docsify, Sphinx / etc. Easy way to add architectural docs next to the api docs as part of git pages.

That being said, I generally am unable to use a majority of these in my day job. So I'm operating much slower than I'm capable of.


Extra RAM for your brain - whiteboard film surface on your desk + 4 monitors.


Interesting. How does that work when you want to rest your forearms while typing?


Plot twist, user has no arms.


Bazel. I really like the workflow of how it only selectively runs unit tests.


- a good IDE (contains debugging, code look up, code completion, automation of boring parts like selective recompile, refactoring, linting etc etc)

- version control (unlike you, mostly the basic parts, the safety net)

- testing libraries


Read code. Every code. All the codes. You are striving to learn strategies for organizing abstraction. Tools are great but your thinking will move faster with a larger vocabulary for organization.


- git: especially branching, rebase, and GitHub PR etiquette.

- regex: the deeper topics such as backtracking, look behinds, etc help too.

- How DNS works: TTLs, etc.

- How TLS works: performance and security tweaking.

- SQL: modern SQL can do quite a lot!


Pytest => https://docs.pytest.org/

Making the smallest, most granular problems out of the bigger ones is the key to vict'ry.
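
For anyone who hasn't seen it, a minimal sketch of what that granularity looks like; the slugify() function is invented just to have something under test:

    # test_slug.py -- run with: pytest test_slug.py
    import pytest

    def slugify(title):
        return "-".join(title.lower().split())

    @pytest.mark.parametrize("title,expected", [
        ("Hello World", "hello-world"),
        ("  Many   Spaces ", "many-spaces"),
        ("already-a-slug", "already-a-slug"),
    ])
    def test_slugify(title, expected):
        assert slugify(title) == expected

Each parametrized case is one small, named problem; when one fails, pytest tells you exactly which input broke.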


- Lambda Calculus, and Functional programming principles.

- Thinking in terms of types and structure, rather than linear procedures.

- Category theory.

- Vim as a language for editing text.

- Emacs (my own config w/ Evil).

- Nix.

- A documentation specific browser, such as Dash.

- The latest grep flavour - ripgrep.


Rubocop made me a better programmer in terms of writing clean, readable, and consistent code.

Also, at the organization level, it keeps the codebase consistent no matter how many programmers work on it.


GDB.


Magit. Before I used Magit, using git was a chore; now it is a pleasure.


jq - I deal a lot with JSON in my work (who doesn't these days?). jq is a nice tool for transforming and extracting specific data from JSON. Like awk, it's a Turing-complete DSL.


Investing time in customizing VS Code. Today, I manage my projects end to end from VS Code: create a new branch, code, publish the branch, open a PR. I have shortcuts for certain git commands too.


FinderPath[0]. It taught me what macOS looks like in its fs.

[0] https://bahoom.com/finderpath/


* `perf top`

* `scan-build`

* https://github.com/pwndbg/pwndbg, a very comprehensive gdb extension/config.


Apple II, TRS-80. Quake engine. Haskell, F#, Rust. Type theory, domain and test driven design. Philosophy, economy. Linux. Physics engines, PyTorch, GStreamer. Twitter.


Docker. Which hasn’t really helped me get _better_ at writing code. But my God it’s helped my productivity and I’m finishing more and more side projects because of it.


The big thing for me was discovering JetBrains products and how much care and thought is taken in building a product that provides the right tools when you need it.


Incorporating a clipboard manager into my workflow was a game changer for me. It's basically a cache for my mind. I couldn't imagine life without it now.


Which one do you recommend?


While programming in Elixir, iex. I spend most of my time in the REPL, testing my functions, learning new features, and of course the recompile command!



Watching Rich Hickey videos.

Cider REPL

Literate programming in Org mode

Keeping outlines of notes in org mode for everything. Being able to execute the code in those notes is pretty friggin amazing.


Could you give your top 5 Rich Hickey videos for beginners?


My JSON toolkit has been a huge boon for me; it complements jq very well: https://github.com/tyleradams/json-toolkit

There are tools for converting between JSON and other data formats like XML, as well as misc tools like json-diff, which diffs JSON files and formats the output as JSON.


New Relic, APM is important, and grokking this or a similar tool (Datadog, AppDynamics, Solarwinds) can only make you much more productive.


Being able to read code from similar projects in Github.

It allows you to do better design, learn how to write readable code, and move faster.



IDE features may be my biggest productivity boost.


Learning how to use type holes has really helped a lot in recent years. Learning how to grok documentation and source code as well.


- xmonad

No distractions and I can program my window manager to organize my windows.

- git

I was once a fan of Mercurial, but I like git better now. Every time, I find new, interesting ways to use it, also outside its traditional uses.

- (n)vim

I use other editors, but I turn them into vim. It is easy to use and makes typing in the solution to your problem in the language of your choice less boring. (The coding part)

- awk

God, I love awk. It is like the swiss army knife of unix. You can do everything with it.

- latex

If I want to order my mind by writing a bit, making a nice design, or creating some documentation, LaTeX is your friend.

- dot

If I want to sketch something, dot will help. Dot also helps with other stuff, dependencies between things. It is easy to write out a dot file.

- profilers

Any fucking profiler will make you a better programmer. It will test your assumptions about the underlying model.

- debuggers

You can't do things without them ^_^

- static type systems

Learning to think in types is a big win and not a weakness. You can encode properties in type systems.

- mathematics

Learn it, use it. It will give you new ways to think. E.g. algebra is about composability. So the first thing I do when I have a problem is write out the data types and think about their algebra. And it keeps your mind active; just churn through some mathematical topic once in a while. I loved type theory. Lambda calculus, abstract algebra, linear algebra, and now I'm learning point-set topology.

- esoteric languages

They are fun and force you to rethink what computation is. Design a couple yourself. I can compute in any language with weird constraints. I just need to build a machine in it.

- keep playing around

Not everything you do needs to be a project, just make something useless. E.g. I like jotting down shit in netlogo, which is utterly useless, but fun to watch.

- zsh

Where am I without you? Zsh can interface with everything. I write small tools, from finding EC2 instances in the cloud by tag to presenting searchable menus bound to keys, with a little help from small programs.

- the core utils

Learn them by heart, you will never have to click ever.

- Learning new programming paradigms

I loved stack based programming, functional programming, thinking in excel (data flow). There is no one solution to everything.

- domain specific languages

Learn to make them; it is fun and an extremely powerful tool in the programmer's toolbox.

- parsers

Don't be afraid of them, they are your friend. And often easier than regexes.

- learn how to search

Big skill, not everybody does this right.

- Programming notebooks

Why is this not standard? Mathematica is brilliant at this, and while I don't like Python, Jupyter is very good.

- IDE's

They are useful. Don't be vain.

- Make designs

Plan what you are going to make. That helps a lot.

- Property style testing

Check out QuickCheck ^_^ (there's a tiny Python analogue after this list).

That is my list. It is not exhaustive.
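To make a few of the items above concrete, here are minimal sketches; the file names, fields, and targets are made up for illustration.

awk, totalling one column grouped by another:

    # sum the 3rd field per value of the 1st field in a whitespace-separated log
    awk '{ total[$1] += $3 } END { for (k in total) print k, total[k] }' access.log

dot, a tiny hand-written dependency graph (render with `dot -Tpng deps.dot -o deps.png`):

    digraph deps {
        app -> libparser;
        app -> libnet;
        libparser -> libutil;
        libnet -> libutil;
    }

Property-style testing, using Python's hypothesis library as a rough QuickCheck analogue:

    from hypothesis import given, strategies as st

    # the property: reversing a list twice gives back the original list
    @given(st.lists(st.integers()))
    def test_reverse_twice(xs):
        assert list(reversed(list(reversed(xs)))) == xs

    test_reverse_twice()  # runs the property against many generated lists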


"a much better programmer" != "much more productive" (At least from my point of view)

I'm going to answer on the latter, because I can't say if I'm a much better programmer than, say, 5 years ago (but I blame that on the fact that the product I develop is "frozen" and I don't do much more than small bug hunting and/or forensics when something goes wrong; or "janitoring", as I refer to it nowadays).

In no particular order, these are the tools that over the years have made my life incredibly easier and let me focus on the problem at hand, instead of solving the problems I had to deal with before I could get to the problems I actually had to solve:

1. Git. 12 years ago, I was the sole developer at my company, and I had to strong-arm my boss (and company owner) into ditching CVS for Git. Probably one of the best company decisions I was responsible for (the next one being getting the company on Slack).

2. xDebug. When I discovered xDebug (PHP), my life changed overnight. As I said, I don't do much coding these days. When I was full-on coding, I had hacked together a way to work with xDebug using a small class made specifically for debugging, which allowed me to do "code hot reload" until I hammered my code into place.

3. Any tool by JetBrains. I have a lifetime licence for PHPStorm 2016.x (You get that after paying the license for a year). And I currently am a paying customer of WebStorm. I plan to keep paying for it until I make it to the year, because then you get a lifetime license. I know IDEs and editors are bound for flame wars, but honestly, for people like me that are not really invested in getting an expert command of their tools, JetBrains gets you the most bang for your money right out of the box.

4. VirtualBox, then Vagrant, and these days Docker. Virtualization for me was the best way to fix the "but it works on my box" syndrome some 10 years ago. I discovered VBox and then Vagrant, and I've been a happy customer for years. I'd been eyeing Docker for some time, and the quarantine gave me the time to investigate it on my own. With VBox/Vagrant I had the problem that whenever I moved I had to move large archives for the different disks I had accumulated over the years. Now with Docker I just need to move the data, the code, and the Dockerfile that builds the container I use. I also use Docker to learn other things like Node and Python. For Python I've been using Miniconda for some time, but I didn't like the way it handles environments, nor had the time to learn how to handle them correctly. Same with Node and the node_modules hell. With both, I use small setup files where I store a default configuration, so whatever package I get is saved on shared volumes on my machine, and the same goes for the code. That way I decide what to share where, and getting rid of something is as easy as removing a folder. (A minimal sketch of this setup is at the end of this list.)

5. Linux. I started learning me some Linux when I started this job 13 years ago, and over the years it grew on me. Last year my notebook disk gave up, and when I changed it I installed Debian on it. This year the same happened to my desktop box, and I did the same. Even though there are programs that I do miss from Windows, the reality is that many of them I can run with Wine, and the others get replaced by something else that is, if not better, at least on par with their Windows counterparts. And I keep finding tools that are incredibly awesome (like mpv for video playing).
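To illustrate the Docker setup described in point 4, here is a minimal sketch; the image name, paths, and cache folder are invented for the example:

    # run a throwaway Node container; code and installed packages live on the host,
    # so wiping the project later is just deleting a folder
    docker run --rm -it \
      -v "$PWD/code":/app \
      -v "$PWD/node_modules_cache":/app/node_modules \
      -w /app \
      node:14 bash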


- Regex. I don't think anyone's a true developer until they've mastered using regex where globbing would otherwise be used. Using regex in places where you'd normally reach for globbing is an excellent place to start learning (a quick comparison below).

- Unit testing. No true developer can call themselves such until they can programmatically verify that their code works to some explicit specification.

There's tons of other things that are bonus points. But I wouldn't call anyone a true developer until they understand and use both of those.
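For anyone starting out on the regex point, here is the kind of glob-to-regex translation being described; the file names are invented, and `ls` plus `grep -E` is only used to make the comparison visible:

    # glob: matches report-01.csv, report-02.csv, ...
    ls report-[0-9][0-9].csv

    # the same match expressed as a regex
    ls | grep -E '^report-[0-9]{2}\.csv$'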


kdb/q: it's a programming language more than a tool, but learning it has made me more efficient in other languages. Its vector approach to programming makes me think about how to leverage similar features in Python, for example. The itertools library in Python is partly inspired by APL, an ancestor of q.
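For example, this is roughly the habit that carries over; the numbers are made up, and numpy stands in for q's built-in vector operations:

    import numpy as np

    prices     = np.array([3.5, 2.0, 7.25])
    quantities = np.array([10, 4, 2])

    # element-at-a-time, loop style
    total = 0.0
    for p, q in zip(prices, quantities):
        total += p * q

    # whole-vector style, closer to how q/APL encourages you to think
    total_vec = float(prices @ quantities)

    assert total == total_vec  # both are 57.5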


Still no mention of a clipboard manager?


Three books:

* Nonviolent Communication

* Crucial Conversations

* Writing Without Bullshit


In recent years:

1) Ripgrep

2) Upgrading my Lenovo T430 laptop to a P53


My only beef with ripgrep is that "rg" is only a small typo away from "rm".

One time I almost deleted my working directory, but luckily I had passed the -i flag for case-insensitive search :)


Have you considered creating an alias - such as "rip" or "grrr" - for using ripgrep to help avoid that concern of rg vs rm?


The "rm" thing was only (almost) a problem that time because I had passed a list of files to ripgrep, instead of passing it a list of directories to do a recursive search. rm by default don't erase directories.

But now that you mention it, creating an alias gr=rg sounds like a no-brainer. Thanks.


stgit. Kind of like quilt on top of git. It lets you keep a lot of balls in the air all at once with very little effort. http://www.procode.org/stgit/


Doing lots of small test or benchmark programs to better understand how the language works.


All of the following made me a better programmer:

1) Functional Programming

2) VIM

3) Kinesis Advantage

4) tmux

Really, I can't think of programming without those four tools.


Focusing on "more efficient" in "better":

- emacs: works for me to code in R, python, connect to remotes, etc

- keyboard shortcuts: got started with them early, and never looked back. Bonus: linux/macOS/emacs text navigation works across all of them (ctrl-f, ctrl-b, ctrl-a, ctrl-k, alt-delete, etc)

- git: managing file versions and file diffs (!)

- GitHub: terrific for project management, even when I'm the only one on the project ;)

- CLI utilities that I love:

    - ag, a better "grep"
    - fd, a better "find"
    - tldr, a better "man"

My dotfiles and new box setup: https://github.com/pavopax/dotfiles

CLI tools:

https://github.com/ggreer/the_silver_searcher

https://github.com/sharkdp/fd/

https://github.com/tldr-pages/tldr

EDIT: formatting


... then, for data science tasks, these were a revelation:

- R's dplyr and the wider tidyverse

- Python's scikit-learn

They are terrific APIs that, after gaining deep familiarity with them through experience, are literally a joy to use and make me feel like I've "leveled up" (way up).


For me it’s definitely Elixir


Interesting. Why? What does it have (or haven't) that made you a better programmer?


Well, it's certainly not tools that let me add lines of code.


Tmux, Emacs, Perl, jQuery.


An automated test suite with snapshots and contracts for data.


* version control

* static typing (golang)

* unit tests


Wait I thought everyone was switching to dwm now..


xargs! Instant concurrency for small scripts.


Could you go into more detail about how you're using xargs? I'm having trouble understanding how it can be used for concurrency.


xargs --max-procs N (or -P N) starts up to N instances of the called program in parallel, at least with GNU xargs.
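For instance, with GNU xargs (the directory layout here is just a placeholder):

    # compress every .log file under the current directory, up to 4 gzip processes at a time
    find . -name '*.log' -print0 | xargs -0 -P 4 -n 1 gzip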


The linter has been my best programming coach.


fzf, tmux, Vim + plugins + language servers, lldb/gdb, and language REPLs in general (e.g. ghci for Haskell).


* AWK - I read "The AWK Programming Language"[0] cover to cover. It's a very well written text, and it's not all that long. The examples are very impressive. In terms of bang-for-the-buck of learning tools, AWK has definitely given me the most mileage. I probably use it at least a dozen times a day just for little things. However, you can wield it to write some very powerful scripts in very short amounts of time.

* Make - I use Make for almost every project I write, even if the rules are just .PHONY shortcuts (a minimal example of that style is sketched at the end of this comment). Make solves a huge set of problems relating to the order in which things need to be run/built very well. There are arguably better tools, but Make is widely deployed, widely used, widely understood, and solves the problem well enough for a lot of small to medium projects (and some large ones too!). I got asked about it enough that I wrote an introductory guide for it[1] (disclaimer: self-promotion of my own site, but I don't have any ads or make any money). If you feel like Make has a lot of legacy crust built up over the years, you should read[2].

* Graphviz[3] - a huge number of the ad-hoc data structures you will build for your projects can be hard to visualize; Graphviz makes it easier. One tactic I've found useful is to loop over nested structs, using their memory address as the identifier in Graphviz, struct fields as text annotations, and nested struct pointers as outgoing links. This might sound fancy, but you can probably write an export_to_graphviz() function for your project in under 50 lines of C. Because the syntax is simple, it's very easy to generate Graphviz from pretty much any language out there. (There's a rough sketch of this tactic at the end of this comment.)

* Xpath - if you've ever wanted to do even simple web scraping or XML parsing, do yourself the favor of learning Xpath. It's a very powerful way of querying XML-like documents. I learned it by writing bots in Selenium for an internship, but nowadays I mostly do very simple web scraping for personal projects. To that end, I wrote a little tool[4] to grab the contents of a page, run a query, and print the results out on the console.

* Not a tool per se, but pick some kind of "personal knowledge management" type of solution and use it religiously. I like Joplin[5], but there are a million out there (Evernote, OneNote, ZimWiki, TiddlyWiki, VimWiki, Emacs Org-Mode, and many, many more). Being able to refer to earlier notes is invaluable for long-running projects.

* Also not very specific - learn the scripting language for your platform. In UNIX-land that's sh (or Bash), and in Windows that's PowerShell. Bash and PowerShell both have benefits and drawbacks, and you probably shouldn't write "real programs" in either. But knowing how to script whatever platform you're using buys you a lot.

* One more, also non-specific one - learn an interpreted language. Nowadays people like Python, but Perl, TCL, Lua, JS, and others could all be valid choices. These are great for prototyping ideas that you will later port to the language you really use (if it isn't on that list already), or for writing little tools or utilities for yourself to use. Which one you choose will depend on what library ecosystem is most relevant to your work.

0 - https://www.amazon.com/AWK-Programming-Language-Alfred-Aho/d...

1 - http://cdaniels.net/2017-07-15-guide-to-make.html

2 - https://tech.davis-hansson.com/p/make/

3 - https://graphviz.org/

4 - https://git.sr.ht/~charles/charles-util/tree/master/bin/quer...

5 - https://joplinapp.org/
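Two sketches for the Make and Graphviz points above; the targets, commands, and data layout are invented for illustration.

A .PHONY-only Makefile used purely as a project command runner (recipe lines must start with a tab):

    .PHONY: test lint run

    test:
    	pytest

    lint:
    	flake8 .

    run:
    	python main.py

And a rough Python version of the export_to_graphviz() tactic, using id() where the C version described above would use a pointer value:

    def export_to_graphviz(root, out):
        """Walk a structure of nested dicts and emit a dot graph, one node per object."""
        seen = set()

        def walk(node):
            if id(node) in seen:
                return
            seen.add(id(node))
            # scalar fields become the node's text annotation
            label = ", ".join(f"{k}={v}" for k, v in node.items()
                              if not isinstance(v, dict))
            out.write(f'    n{id(node)} [label="{label}"];\n')
            # nested structs become outgoing edges
            for key, child in node.items():
                if isinstance(child, dict):
                    out.write(f'    n{id(node)} -> n{id(child)} [label="{key}"];\n')
                    walk(child)

        out.write("digraph g {\n")
        walk(root)
        out.write("}\n")

Usage would be something like export_to_graphviz(my_nested_dict, sys.stdout), piping the output through `dot -Tpng > out.png`.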


* Linux

* i3

* git

* Elixir (functional programming concepts)


magit


perf and pprof.


Emacs, Magit.


Git and grep!


I use ripgrep (`rg`) much more than I use grep now. Wish I'd discovered it sooner.


Oh yeah, 100% this!


and `git grep` :)


Postico


Linux


PyCharm


Node Red


Listening


Pycharm


Emacs.


Github


Grubhub


Free Software


So much useless information in this thread.

The best way to become a better programmer is to read more code and understand what it does and think about different ways to implement the same feature.

Programming is problem solving at its core. Everything you're mentioning is secondary to achieving that goal.


You're right, let's go back to punchcards.



