I personally think the number of library functions memorized is the biggest difference between average programmers and the so-called 10x programmers. An extremely high level of fluency with a programming language environment is invaluable when it comes to efficiency and code quality. It is analogous to conversing in a language where one has to look up every other word, compared to a native speaker with deep fluency. It's not just a matter of speed; the depth and quality of thought is orders of magnitude greater in the fluent speaker. The more of the heavy lifting you can do unconsciously, the higher the level of output you can produce for the same amount of mental effort. We accept this in just about every profession, yet we resist it when it comes to programming. Personally, I'm glad my doctor memorizes a large amount of the facts he uses in his day-to-day work.
It is unfortunate that just about every new language brings with it a new set of standard libraries that we must learn to use effectively. We give up so much expertise and efficiency when we don't allow ourselves to build up a high level of familiarity with them. I can imagine a future where there is a standard library of methods that every programmer attempts to memorize and that every new programming language is programmed specifically for this standard library. We are already seeing this with language frameworks like the JVM and .NET, but we need to go even further. Hopefully libraries themselves will be created specifically to be easy to memorize.
Now, the cost is approximately <5 minutes per flashcard to give you roughly 95% recall (more details: http://www.gwern.net/Spaced%20repetition#what-to-add ); so the question is, do the consequences of not knowing something you could've put into a flashcard - the time it takes to look it up, whether you will even know to look it up, etc. - outweigh the cost of a few minutes' review over that time period?
I think for a lot of stuff this can be true: shell scripting is not going away in 20 years, for example. If you're programming in it routinely, a language you will use for only 10 years can be worth memorizing for a while. If you're using a tool every day for the next year, there's going to be a lot worth memorizing there too.
> In programming, there are very few things that will still
> be the same in 20+ years and we have no idea what those
> things are.
SRS is well designed for gradually changing bodies of knowledge. New cards are added and come up more often, older cards can be retired but they rarely come up anyway.
I don't know, what about the ability to think outside the box, a solid grasp of the fundamentals, or the capability to understand and create high level abstractions?
Seems like you're saying the ability to glue together a bunch of library functions is what classifies a great programmer. If that's the case, it should make programming interview tests rather trivial.
The author of the post used PHP as an example and I think it's a terrible example because PHP's standard library and 3rd party libraries were all over the place in terms of naming conventions, structure, duplicated but not quite abstractions, &c...
I've found learning languages like Haskell to be very different - where learning the axioms and postulates is what leads me to implementation specifics. A good example: I was building accumulating recursive functions for a little project and was looking at the code and said this to myself, "This doesn't feel right, this looks like Scheme or Erlang code - not Haskell."
I set out to find any abstractions built into Haskell or 3rd party libraries that handled accumulators - lo and behold, folds! I've done that numerous times with this language: proceeding from the fundamentals and the process of abstraction, then checking first whether the abstraction has already been done!
I use Spaced Repetition for learning Haskell - but it isn't names of functions; it is the Monad laws, or Functor laws, &c... that I want to memorize.
I agree, but working from the standard library has it backwards. One should subconsciously know what is in principle possible and then look up/remember how it's called in the language you are currently working in, and if it is not there implement it yourself.
I get the resistance to memorization--I am the laziest SOB I know (and being in the company of programmers, that's no small feat). I got through college almost never taking notes and by just understanding concepts. It worked great for some subjects, namely math, CS, physics, etc. But one thing a math professor said in a higher-level course for math majors stuck with me: "If you don't memorize what came before, you will never be able to make new discoveries". In hard math you are expected to memorize the results that came before, as this is the only way to discover patterns between them and create new associations.
I think the same applies to programming. Your mind can only hold and manipulate so many units of information at a time. The more abstract those units are, the greater the resulting mental structures will be. We artificially limit ourselves in our resistance to memorization.
I hope this format is useful to you guys. I actually spent the last two days making a video of this, with screen recordings of using Anki and such, and then felt it's a lot more efficient to just get the info on a static HTML page instead.
Any advice appreciated.
Are these users' results actually better? In other words, is their higher level of confidence justified?
1. time saved looking things up
2. knowing what is possible and what tools you have at your disposal built right into the language.
Looks great, I hope you do well.
At this point, I'm seeing the same thing again, but distributed to a larger group of people. Everybody and their cousins are making SRS programs for learning languages, often Chinese, and that giddy feeling I had when encountering the idea of SRS seems to be all over the place!
How I wish I could explain that context matters and they'd learn faster by actually using these discrete chunks of knowledge in context! That reviewing decontextualized data is a poor way to synthesize most things! But who would listen? I wouldn't have 4 years ago.
SRS is seductive. It's amazingly good for things in which there is a set number of essentially unrelated pieces of information to memorize (e.g. 5000 common Chinese characters), but it's terrible for learning something more nuanced (e.g. how to read Chinese). I do still keep it as a tool, but it's a special purpose one.
A few months from now Derek is going to be asked what Array(5) does. He will be pissed off that he doesn't know the correct answer, but he will now remember not to use the Array constructor, because it is ambiguous and the literal syntax is better.
var a = []; a.length = 5; is better than: var a = Array(5)
The former is obvious nonsense, the latter is non-obvious nonsense. If Derek is like me, he will then be asked to rate his knowledge, and be pissed off that he now knows more but Anki makes him admit to not knowing it.
Good point. When I read about SRS for the first time a few weeks ago, I had roughly the same idea as the author of this article (except I didn't carry it out--a million kudos because he did :) ).
Now I think the difference from using SRS for learning a natural language is the ultimate goal. It's not "I want to learn how to program in language X"; if I wanted to do that, I'd do some tutorials or try to complete some Project Euler challenges in that language. Instead, it's "I want to spend less time having to look up parameters and usage of functions in language X's standard library, a language I am already quite familiar with apart from spending too much time reading the documentation".
Now that I think about it, I see another great advantage. It's a personal thing; I don't expect most people to battle with this problem as much as I do. Whenever I need to look up something in the docs, the problem is not only that this takes time, but also that I tend to read way more of the docs than is relevant to the problem at hand. I click through to other pages that seem interesting, which reference an external site, I look up stuff related to that, and I get completely sidetracked. Not only does this take an extraordinary amount of time, it also drains mental energy that I should be using to work on the initial programming problem. This lack of focus has gotten so bad that it resulted in severe burn-out, and I've basically resigned myself to not pursuing a career in computer-programming-related jobs, even though that's what I love to do.
Not having to look up things means I don't need to task-switch (as much) and can stay focused on the programming. I've tried so many other things; I really must give this a shot.
When I read about SRS, there were a couple of programs available, is Anki generally considered to be the best (free) one to start with?
The AJATT guy was big on SRS, but he was also watching hours of Japanese TV each day and even playing Japanese music and various other MP3s in his sleep. He got the requisite comprehensible input. He achieved a good level of language skill, but for the time he put in it wasn't exceptional.
L2 acquisition linguists generally agree that the important thing is "massive comprehensible input". This could be reading or listening. There is disagreement on whether input alone is enough: hardliners such as Krashen would say yes, others would say no. However, there's little disagreement that input is the most important factor. And an SRS will never keep up with actually reading a book when it comes to input.
3 years ago, I was that guy with a huge SRS deck, reading AJATT and writing my own blog about it. If you're unconvinced by the above, all I can say is make your huge SRS deck, do it daily, and then come back in a few years and let me know how it went.
It just always sounded interesting because I had reasonable success with flashcards for single words. In retrospect, that was before I had the ability to read even short stories, so maybe I am remembering the utility of them wrong, or they seemed useful because the alternative was nothing.
I can't see the utility for programming languages (compared to human languages), however, as the grammar and vocabulary of programming languages are tiny in comparison. The best way to learn them is to write something with them (IMHO). The available libraries are of course broader, but the subset you use tends to differ from person to person, and actual use will tend to reinforce what you most often use.
It's the libraries. Deep recall of the standard library of any major language would probably improve programming fluency a lot -- and it might improve code quality too.
I wrote an honours research proposal on this; email me (see profile) if you want a copy.
To compare the groups, I proposed using a few common metrics (SLOC, cyclomatic complexity, Halstead's metrics) to get a gauge of the different size of solutions. My guess was that a more "fluent" student would write shorter and simpler programs simply by not needing to reinvent.
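As a sketch of how such a measurement might work (this is my own illustration, not tooling from the proposal), a rough cyclomatic-complexity count can be built on Python's `ast` module by counting decision points:

```python
import ast

def cyclomatic_complexity(source):
    """Rough McCabe complexity: 1 + number of decision points."""
    decision_nodes = (ast.If, ast.For, ast.While, ast.IfExp,
                      ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(node, decision_nodes)
                   for node in ast.walk(ast.parse(source)))

code = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # 3: the if and the elif each add one
```

Real metric tools handle more node types and count Halstead operators/operands as well; this only shows the shape of the idea.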
Not really commenting on spaced repetition here, that should be helpful in either case.
At my university the freshman course was, for two years, taught in Haskell. To my eternal shame I managed to dodge most of the first year with a fistful of RPLs, so I never got to sample it.
The result was that lots of students dropped out of computer science and went elsewhere. More than had dropped out when the course was taught in something else.
So they switched back to an intro course based on Java (which is their main teaching language for the first two years; C is the second language, picked up in the second half of the first year). Then you go on to the bog-standard Data Structures & Algos / Computational Structures courses, the Algo course, etc., which is where you learn the ins and outs of sorting, trees and so forth. Or, in my case, you barely do so, because you are a lazy student who didn't do his homework.
The upshot is that you wind up spending most of the intro course teaching the mechanics of Java and motivating students with "interesting" examples. Various graphical geegaws, basically. In such cases students usually aren't using the standard lib, they're using a provided library.
The others were:
* a robust user-tracking protocol for websites (this is the project I went with). Inspired by a business idea I had in 2008; currently the basis of my "big" startup project.
* a "whole-machine" architecture proof-of-concept -- basically a blog app targeting a VPS. Gives you a lot more design options if you can control the architecture from the OS up. Inspired by my eye-blistering hatred for Wordpress.
* "A model of player Agency for software-created, interactive, just-in-time plot generation". There's simply no way this would have fit into a one year project, but it would be very interesting to pursue some day. I was particularly proud of my little taxonomy of plot generation mechanisms. Inspired by chatting to mates about what's wrong with MMORPGs.
Offer is open to other HNers.
You're constantly being asked by an English voice to say something in French, and after an hour it asks you to repeat the new words you've learned.
Whenever I write PHP I look up a lot more methods than when I write Ruby, even though I have more experience in PHP. (Ruby cheats, a little bit, by having multiple names for many methods, making a wild guess much more likely to be correct.)
Not to mention a mix of function/method parameter conventions. Needle, Haystack vs. Haystack, Needle.
As a programming languages geek, a few months ago I realized that every time I came back to a language that I had learned earlier, I'd forgotten a good chunk of it, rendering all my efforts in vain. So I started using Anki. Here are a few tips from my experience:
- Koans are great as source material. They're available in almost every mainstream language, the problems are usually solvable in less than a minute and you can just copy & paste.
- Always open your repl and type your answer. Otherwise you're not really thinking about the problem but just memorizing the answer.
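For instance, a koan-style card (my own hypothetical example, not from an actual koan set) might blank out a return value and require you to verify your answer in the repl:

```python
# Front of the card: fill in the blanks, then check them in a repl.
#   "hello world".title() == ____
#   sorted([3, 1, 2], reverse=True) == ____

# Back of the card, verified in the interpreter:
assert "hello world".title() == "Hello World"
assert sorted([3, 1, 2], reverse=True) == [3, 2, 1]
```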
*Edit: BTW, I've got lots of Ruby, Python, Clojure, Erlang/OTP and a few Dylan flash cards, if anyone's interested.
Maybe you could put them on github, perhaps?
Only a few weeks ago I first read about the spaced repetition technique, and my first thought was "what would I want to memorize? maybe I could use this to learn the parameters and usage of Python (or some other language)'s standard library" ... and there it is, on HN, including someone offering a set of flash cards, saving me that work :)
Although creating one's own flash cards is a good first step in learning, I wonder how useful it is (in that particular regard) for someone already reasonably familiar with the standard library, who just wants to save time by looking things up in the docs less often?
Regarding the memorization technique, I am unsure whether this is the most effective way of remembering things. The book "Moonwalking with Einstein: The Art and Science of Remembering Everything" presents a lot of tricks for memorizing things that could be more effective than the method described in this article.
And it is a good idea to look things up every now and then, because they change. In the olden days, if you wanted to open a file you just did it. Nowadays you have to consider things like TOCTOU, process credentials, character sets, permissions to require/set, virtual locations (eg "My Documents", localization), quotas, race conditions, fsyncs and renaming of temporary files, etc. Heck, even adding two integers can result in security flaws if not done carefully.
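As a hedged illustration of that last point (Python's own ints are arbitrary precision and don't overflow, so this simulates C-style signed 32-bit arithmetic):

```python
def add_int32(a, b):
    """Add with signed 32-bit wraparound, as a C int would."""
    total = (a + b) & 0xFFFFFFFF  # keep only the low 32 bits
    # reinterpret the bit pattern as a signed value
    return total - 0x100000000 if total >= 0x80000000 else total

INT32_MAX = 2**31 - 1
# A "harmless" addition silently goes negative -- the root of many
# overflow-based security flaws (e.g. undersized buffer allocations):
print(add_int32(INT32_MAX, 1))  # -2147483648
```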
It sounds very effective but only if you make time for it every day. Combining this with the "Moonwalking" techniques might be best of all.
1) Learn to write idiomatic code.
2) Learn the name of the library functions, so your flow isn't constantly interrupted by looking for what you need.
1), I think, can only be done by reading other people's code. But I've previously thought about doing 2) by using spaced repetition.
Eh, this isn't necessary. I've been programming (and continuously making a concerted effort to expand my programming knowledge) for more than a decade, and memorization of anything simply hasn't been very helpful.
For example I have no idea what Python's regex syntax/library function is, even though I've used it a dozen times in the past. But that doesn't matter: alt-tab to chrome, ctrl-N, "python regexp", stackoverflow pops up with an example, copy-paste, done. Five to ten seconds, max. Far from breaking my flow, it's become an integral part of it.
Zed is the same way, for what it's worth. He talks about it in a Peepcode screencast. "I memorize concepts, not names."
I find, though, that this is one of the most insidious causes of bugs in my code. All the little idiosyncrasies in those functions - do they throw an exception when they fail or return a code? Are regexes multiline by default or not? Does re.match() match the whole string against the pattern or just part of it? Time and time again it's my assumptions about these subtle behaviors that create bugs in my code, and I've actually come to the conclusion that I need to do some SRS-type learning to get newer languages into my head.
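Python's `re` module illustrates exactly these gotchas (behavior as documented in the standard library; the examples themselves are mine):

```python
import re

# re.match() anchors at the start only; it does NOT require a full match:
print(bool(re.match(r"\d+", "123abc")))      # True  (matches the "123" prefix)
print(bool(re.fullmatch(r"\d+", "123abc")))  # False (whole string must match)

# ^ and $ only match at inner line boundaries with re.MULTILINE:
text = "one\ntwo"
print(re.findall(r"^\w+$", text))                # []
print(re.findall(r"^\w+$", text, re.MULTILINE))  # ['one', 'two']
```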
Not at all. You only need to do it once for the current program you're writing. Now you have a working example to refer back to, within the program. So the lookup time only comes into play the first time you need to do a certain thing. It's a negligible constant factor (about a couple minutes in total per program, and without interrupting flow).
Also, being prejudiced against those who have a bad memory is not helpful.
Knowing the core functions is a huge timesaver.
Not really. By using Google effectively, it's possible to be productive in any language without knowing any of the core functions.
If you just look up functions when you need them, you'll memorize the most important ones over time, no special practice needed.
So I really can't see how this isn't the worst form of "premature optimization". I guess it's fun if you've got nothing better to do, but I'd never recommend it.
(But I'm not saying you shouldn't read through a language and its libraries so you know what's there conceptually. That's a good thing to do, so you know what to look up later.)
As far as remembering programming language constructs and vocabulary, the way to do it is do projects. You use what you learned, and naturally, you're going to come back to it when you have to fix a bug or implement a new feature. All the while, you're building something good, too.
Another thing. If you're using a well-designed language, the concepts will incrementally build on themselves. They won't just be a bucket of orthogonal concepts (like PHP, which seems to be mentioned in the article). Therefore, by learning fundamentals, you can more easily understand and decipher more advanced concepts. So, maybe you did forget what something does. But after a few minutes entering commands into a repl, you fully understand it again, because of your crystallized knowledge of other concepts.
I can't imagine a worse way to learn a new programming language than brute memorization.
Then a few days ago I started reading The Memory Book, which really emphasizes visualizing things, making correlations, etc. The effort of visualizing and using imagination really made a difference: in one hour I memorized the list of 50 states, forward and backward, and still had almost 100% accuracy after 2-3 days (I had never heard of some of the states before).
Quickly after, I realized that my limited English vocabulary was also the weakness in this system, so I started visualizing and adding new words to Anki once again. The result is that my recall after 24 hours is close to 80-90% for new words, which is encouraging.
I've been doing this for ~1 week, so take it with a grain of salt, but for me it seems that visualizing the words and using SR is helping. It takes 3 times the effort, but as long as I remember things, I don't care.
The idea of applying the system to programming sounds interesting, and definitely worth a try.
It's a well respected way of storing information. But (drawing a terrible analogy) it's a linked list, not a hash.
It's hard to perform a random lookup on something 2/3rds of the way through your mental tour.
SRS works on the well-understood phenomena of repeated exposure and tested recall. SRS systems are very good at imprinting atomic information even if that information has no context whatsoever. The original research was done with randomly chosen letters.
Having read the entire book was incredibly helpful. Anytime I thought of a feature I wanted to add, or wanted to rearrange the site in a certain way, I knew how to do it, or at least I knew that there was some pattern that would easily solve exactly what I wanted to do. I made sites for myself, for my school, for my friends, and they were decent. Not fabulous, but my sites were a lot easier to update and maintain than the site my school's $250/hour consultant had built (tables were visibly different on each content page!)
I used the same approach when I finally started writing code professionally 10 years later. In one case I solved a problem in 17 minutes that my coworkers had spent months trying to solve, because I knew that there was an existing function that did exactly what they were looking for. These things were surprisingly hard to find in Google, but having near-encyclopedic knowledge of the language's features made it a lot easier. Instead of searching for a general problem in an obscure language, I knew the exact function name I was searching for and only had to look up syntax.
Memorization is incredibly useful, even in a problem-solving discipline like programming. And having an encyclopedic knowledge of your language is extremely rewarding when you remember a simple way to solve a complex problem :)
Oh, and the scheduler is tailor-made for code (usage of code chunks is different from word usage, which means the equation parameters optimized for word learning won't work).
Contact me if you want to know more (see profile).
I think memorization doesn't really help you understand the fundamentals of any topic. Rote memorization can only get you so far. As concepts get harder and more complex, dependence on mere memorization leaves a lot of holes in the understanding of any subject matter.
Specifically in programming, syntax or semantics form just a minor part of overall problem solving exercise. Efficient and effective programming requires repeated use of concepts or paradigms in solutions for diverse problems. Our brain is much more effective in registering and recalling facts or knowledge when that piece of knowledge is exercised in diverse scenarios.
I am a co-founder of Lymboo Math (http://www.lymboo.com), an online math practice program for elementary school children. We built the curriculum that follows the natural learning sequence of math concepts. On top of the comprehensive curriculum we implemented a practice structure that relies on interleaved and spaced practice. Students practice daily on individual topics until they are proficient in that topic. Then they move on to subsequent topics along the prerequisites-based curriculum. Throughout the program the system automatically incorporates spiral reviews of previous topics at regular intervals of time to effectively cement all the acquired knowledge.
What we have found is that children easily forget what they learned just a few weeks earlier, and their performance degrades in the initial mixed spiral reviews. However, as they continue the cycle of (learn--practice--review)*, they show improved performance in subsequent spiral reviews.
Mixed spiral reviews model interleaved (mix of topics) and spaced (in time) practice to enhance our context-switching skills. The neurons in the brain make new connections and store patterns that aid in quick and fluent recall of knowledge.
Interleaved and spaced learning techniques are more than just for memorization.
The existing research says that spaced repetition helps with abstracting and generalizing understanding as well as 'rote memorization': http://www.gwern.net/Spaced%20repetition#abstraction
It is true that without a good understanding of fundamentals the advanced concepts will be difficult to comprehend. But, rote memorization should not be construed as something that lays a strong foundation of fundamentals. Even if we are able to recall basic concepts 'learned' via rote memorization, their applicability to understanding advanced concepts is very limited.
One idea for further expansion: go beyond just having specific language syntax, and also include cards on more generic algorithms and data structures. You could key them by time and space complexity, invariants, maybe also a pseudocode implementation.
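As a sketch of what one such card could hold (my own example; binary search chosen arbitrarily):

```python
# Card: binary search on a sorted list.
# Time: O(log n); Space: O(1).
# Invariant: if the target is present, its index stays within [lo, hi].
def binary_search(xs, target):
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```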
I forgot to mention the deck-sharing point:
It's pretty important to learn the material yourself first, then use the cards as a reminder of what you learned. When you go through someone else's deck, there's no context.
But I could see how, if you already knew a language, then going through someone else's quiz questions might keep you on your toes more than your own quiz questions.
Here's a link to a blog post by him describing the technique for programming:
* the 75 most common words make up 40% of occurrences
* the 200 most common words make up 50% of occurrences
* the 524 most common words make up 60% of occurrences
* the 1257 most common words make up 70% of occurrences
* the 2925 most common words make up 80% of occurrences
This kind of analysis would also point at important targets for optimisation, simplification and parallelisation as well as direct the efforts of library implementers for new languages.
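A minimal sketch of such a frequency analysis for Python code (my own illustration, using only the standard library):

```python
import ast
from collections import Counter

def call_frequencies(sources):
    """Count how often each function name is called across source strings."""
    counts = Counter()
    for src in sources:
        for node in ast.walk(ast.parse(src)):
            if isinstance(node, ast.Call):
                if isinstance(node.func, ast.Name):         # e.g. len(x)
                    counts[node.func.id] += 1
                elif isinstance(node.func, ast.Attribute):  # e.g. s.split()
                    counts[node.func.attr] += 1
    return counts

freqs = call_frequencies(["print(len(x)); print(sorted(x))"])
print(freqs.most_common(2))  # [('print', 2), ('len', 1)]
```

Run over a large corpus, `most_common` would give exactly the kind of ranked list quoted above, but for library functions instead of words.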
IMHO, if you want to maintain fluency in a programming language you're not using (say, to keep your knowledge of C and C++ fresh while working in a higher-level language), you should keep contributing to a bunch of open-source projects written in that language, the same as for natural languages you need to keep conversing with people who speak them, or at least keep watching movies in them. Otherwise you'll end up knowing an artificial subset of that language that's useless for real work... And the plus is that having the open-source contributions on your resume will keep you employable for future work in that language.
Unlike a human natural language, there is little value in trying to memorise extreme details without the trial of placing them in an application context. As for new concepts, the value flows from the mere effort of initial understanding and any consequent usage. Meanwhile, the core of a programming language is almost always easily learned just by using it directly.
Of course, an SRS usage as described could be very useful for other contexts, e.g. remembering programming code snippets or concepts for the purpose of interviews.
It's one of the coolest things I've seen in years!!!
a = 5 + '5'
In Python, this raises a TypeError at runtime.
(I used an interpreter for each of these languages to verify this)
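For the record, here's what that looks like in Python (the snippet is mine; Python performs the check dynamically at runtime rather than at compile time):

```python
# Mixing int and str in + is rejected:
try:
    a = 5 + '5'
except TypeError as e:
    print("TypeError:", e)  # unsupported operand type(s) for +: 'int' and 'str'

# An explicit conversion is required either way:
print(5 + int('5'))   # 10
print(str(5) + '5')   # 55
```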
Otherwise, the 'factoids' I have 'memorized' are all out of necessity and come from repeated usage. I don't consciously memorize anything.