Lately I've been thinking about the limitations that being human puts on a programmer. We work very hard (and should) to reduce our cognitive load through good development tools that act as crutches for memory---REPLs and good debuggers let us try something and see what happens, as opposed to simulating a multivariate operation in our heads. IntelliSense and easily available docs let us cheat a bit on learning (and what I really mean is memorizing) APIs.
But what if we could do these things without relying on JIT computer aids? What if I could simulate more levels of abstraction in my head? The private dream of many a Lisper (this one, anyway)---writing programs that write programs that write programs---would be a bit more attainable.
I've been using Anki with great success to learn APIs and keyboard shortcuts (Anki + Emacs is a match made in heaven), but I despaired at my inability to hold the whole stack, from top to bottom, in my head at once.
So I posted a badly phrased question on Stack Exchange ("How can I increase the number of levels of abstraction I can reason about at once?"), and kept Googling. Eventually I came across Jaeggi's research. It looks promising, but hasn't passed the wide-replication test yet. I'm glad this came up on HN, because I'm eager to see more research in the area and get some confirmation or refutation of the findings.
In the meantime, the premise of Jaeggi's conclusion raises two questions---if working memory can be trained, can it decay with disuse? In that case, are we, with our fancy debugging tools, mere shadows of the Real Programmers who used to walk the earth? The other question is this---if the brain is likened to a computer, working memory corresponds to RAM. If we are successful at training working memory and making people "smarter," will we in the future face a bottleneck of processing speed rather than space?
One thing I've noticed about Anki is that the pain of memorizing and retaining stuff has been significantly reduced. As such, I tend to be much more willing to "just memorize the whole thing" in a lot of cases. I guess a good analogy is the impact faster CPUs and more memory have had on programming---when they stop being the limiting factor, we start using languages adapted to us, rather than to them. Similarly, as memorization has become "cheaper" to me, I find myself making choices that involve more memorization. A few days ago I made a deck specifically for all the Gmail keyboard shortcuts. It would not normally be worth my time to commit those to memory, but because the cost has fallen so much, it didn't seem like a bad idea.
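For what it's worth, bootstrapping a deck like that takes about a dozen lines, since Anki can import plain tab-separated front/back pairs. A minimal sketch (the shortcut list below is an illustrative subset, not the full set---fill it in from Gmail's own shortcut reference):

```python
# Write a tab-separated file of front/back pairs that Anki's
# text importer can turn into a deck.

# Illustrative subset of Gmail keyboard shortcuts (front, back).
shortcuts = [
    ("c", "Compose a new message"),
    ("/", "Put the cursor in the search box"),
    ("e", "Archive the conversation"),
    ("r", "Reply"),
    ("a", "Reply all"),
    ("f", "Forward"),
]

with open("gmail_shortcuts.txt", "w") as f:
    for key, action in shortcuts:
        # One card per line: question <TAB> answer.
        f.write(f"{key}\t{action}\n")
```

Then File > Import in Anki, point it at the file, and you're done.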
Personally, I think that attaining the state of flow is what makes programming enjoyable, and I haven't experienced it in many years. There are just too many APIs, too much poor documentation, too many bugs, and too many languages I have to switch between for me to ever get into the 'zone'.
I cope by taking every chance I get in my day job to experience these short bursts of pure creation. Do I need an interesting algorithm implemented? I'll hand-code it myself rather than spend an hour or so trying to find and shoehorn the "standard" implementation into my application. I love the rare occasions where I notice a, dare I say, "clever" solution* to a problem. Coming up with and implementing these cases is a joy. I admit this isn't always the best way to solve a problem from a "software engineering" standpoint. But it keeps me in the game.
*Not clever as in obscure or WTF-worthy, but clever as in elegant and expressive, albeit perhaps inaccessible to less skilled coders.