When learning a new language from a book, one trick I found worked really well was to read the code, then close the book and try to type as much of it from memory as I could. I would futz with it for several minutes, exploring various ways of making it break, seeing if I could 1) predict how I was breaking it, and 2) use the error messages to fix my mistakes.
Then I would open up the book again and compare it to what I had typed, examine the differences between them, and see if/how they explained the errors I was getting.
I'm a high school computer science teacher, and this is a WONDERFUL idea. I'm totally going to steal it and create a few assignments based on the concept.
Please do teach them the value of data representation. I don't think this is obvious, but it can make the difference between having to write a mountain of code and finding a simple solution.
Super-simple silly example: You have to write a program to work with four colors: Red, Green, Blue, Black. A naive representation will use strings: "red", "green", "blue", "black". A little smarter would be an enum type where red=0, green=1, blue=2, black=3. And, depending on context, it might even make sense to use individual bits to identify each color: red=01h, green=02h, blue=04h, black=08h. And, if the goal is to extend to other colors, it might make sense to represent them as RGB vectors.
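In C, for instance, the difference between those last two schemes is only a few lines (a minimal sketch; the names are just illustrative):

typedef enum { RED = 0, GREEN = 1, BLUE = 2, BLACK = 3 } color_t;  // sequential: good for switches and array indexing

#define COLOR_RED   0x01  // one bit per color: a single variable can hold a set of colors
#define COLOR_GREEN 0x02
#define COLOR_BLUE  0x04
#define COLOR_BLACK 0x08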
You get the idea. I find that a lot of newbie programmers are too focused on the mechanics of writing code. They forget (or don't know) that investing some time in optimizing the representation of the data or problem they are trying to solve could make a world of difference.
I think it is important to learn enums, but this is a bad example. You really should use the built-in types when they're available. Color is a particular example (in both .NET and Java) where you should never reinvent the wheel.
In general, teaching enums is very important and something I've not really seen schools do (although their "learn to program" courses are typically 101-level for newbies; I've never seen a program that really builds on the basics).
While I generally agree with what you are saying, it is very important to understand that built-in types are not magical. This is where coming from a low-level embedded C or assembly background can be very useful.
As an example, there are a number of sites showing the performance hit you take if you use some of the NS types in Objective-C. If I remember correctly, in some cases you are talking about NS types running 400 times slower than alternative code.
In my mind, the question of data representation could also include this very choice: Do I use an NSArray or do I do it "by hand", allocate memory and use a "simple" array?
OK, I pulled colors as an example out of thin air but I'll see if I can make it work.
Let's say you have to write a routine that does something based on a color as the input. You have a few choices in terms of how to represent the colors:
- The name of the color in a string
- A typedef enum for your colors (integers)
- A color-per-bit scheme (U8, U16, U32)
- Channel-per-bit scheme (U8)
- 4 or 6 bit packed RGB values (U16 or U32)
- 8 bit packed RGB values (U32)
- 8 bit unpacked RGB values (struct of three U8)
- 16 or 32 bit unpacked RGB values (struct of three U16 or U32)
- Unpacked RGB floats (struct of three floats)
- and more...
I won't go into the implications of each of the above. Some of it is highly dependent on both the system and the objectives of the work being done.
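Just to make a few of those concrete, here is roughly what the declarations might look like in C (a sketch; I'm using uint8_t/uint32_t where the list says U8/U32):

#include <stdint.h>

const char *color_name = "red";                   // name of the color in a string
typedef enum { C_RED, C_GREEN, C_BLUE } color_t;  // typedef enum (integers)
uint32_t color_bits = 0x01;                       // color-per-bit scheme
uint32_t rgb_packed = 0x00FF0000u;                // 8-bit packed RGB values in a U32
struct rgb8 { uint8_t r, g, b; };                 // 8-bit unpacked RGB values
struct rgbf { float r, g, b; };                   // unpacked RGB floats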
Say, for example, that you choose to use the names of colors stored in strings as your color representation. Now you have to compare strings in order to identify the colors:
if (strcmp(input_color, "red") == 0)
{
    // Do something with red
}
else if (strcmp(input_color, "green") == 0)
{
    // Do something with green
}
else if (strcmp(input_color, "blue") == 0)
{
    // Do something with blue
}
// ... etc
Regardless of the language, the strings need to be compared character by character. Even if a language or OO framework lets you write something like if (string1 == string2), keep in mind that what is going on behind the scenes is pretty much exactly what strcmp() has to do. Which means that the above is, at the very least, slow.
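For illustration, a bare-bones version of what strcmp() has to do; every call walks both strings until it hits a mismatch or a terminator (a sketch, not the actual library implementation):

int my_strcmp(const char *a, const char *b)
{
    // Advance through both strings while the characters match
    while (*a && *a == *b)
    {
        a++;
        b++;
    }
    // Difference of the first mismatched characters (0 means equal)
    return (unsigned char)*a - (unsigned char)*b;
}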
And, of course, it isn't very portable. What happens if the input has to be in German or Japanese?
The typedef enum representation gives you the ability to use a far more efficient construct to identify your colors:
switch (color)
{
    case COLOR_RED:
        // Do something with red
        break;
    case COLOR_GREEN:
        // Do something with green
        break;
    case COLOR_BLUE:
        // Do something with blue
        break;
    // ... etc
}
This is much, much faster. At its core it is an if/else-if structure comparing integers, which is a single machine instruction per test; for a dense set of cases the compiler will often emit a jump table instead, which is faster still. Fast, clean, and locale-portable: you only need the proper text-to-integer function somewhere at the input boundary to deal with different languages.
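That text-to-integer function only needs to run once, at the input boundary; something like this sketch (the table and names are illustrative, and a German or Japanese build would simply swap in a different table):

#include <stddef.h>
#include <string.h>

typedef enum { COLOR_RED, COLOR_GREEN, COLOR_BLUE, COLOR_UNKNOWN } color_t;

color_t color_from_name(const char *name)
{
    static const struct { const char *name; color_t c; } table[] = {
        { "red", COLOR_RED }, { "green", COLOR_GREEN }, { "blue", COLOR_BLUE },
    };
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(name, table[i].name) == 0)
            return table[i].c;
    return COLOR_UNKNOWN;
}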
If you are on an embedded system that can do bit testing in machine language, it might make sense to encode one color per bit or one color channel per bit. For example, in some embedded C dialects you might be able to do something like this:
if (color.0)      // Select and test bit 0
{
    // This is red
}
else if (color.1) // Select and test bit 1
{
    // This is green
}
// ... etc
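In portable C the same idea is an AND against a mask, which most compilers still reduce to a single test-and-branch (a sketch with illustrative names):

#define COLOR_RED_BIT   0x01
#define COLOR_GREEN_BIT 0x02

if (color & COLOR_RED_BIT)
{
    // This is red
}
else if (color & COLOR_GREEN_BIT)
{
    // This is green
}
// ... etc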
At this level the advantages of doing this are tightly linked to the platform and the goals of the application.
If, for example, one needs to expand the range of color inputs beyond what can be described with simple words, a discrete RGB representation might be the best choice. The same is true if you want to future-proof the program and be ready for when more colors arrive.
Here you have several choices, two of which are to represent each channel with an 8-bit value or to use floats instead.
The 8 bit values can be packed nicely into a U32, making it very efficient.
You could also create a struct to facilitate access to the components and let the compiler optimize for you.
The float example is interesting because the conversion from float to whatever (if necessary) can be of any bit width. So, for example, if the color needs to ultimately be mapped to an 8-bit-per-channel display device you can translate from float to 8 bits on output. All of your intermediate math and color manipulation would be done in full-resolution floats which means that you are not going to accumulate errors. This, for example, is important if you are applying FIR filters to calculate missing color sample data from certain video data formats.
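A rough sketch of both options, assuming a 0x00RRGGBB packing and float channels kept in [0, 1] until output time:

#include <stdint.h>

// Pack three 8-bit channels into one U32 (0x00RRGGBB layout assumed)
uint32_t rgb_pack(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

struct rgbf { float r, g, b; };  // full-resolution floats for intermediate math

// Quantize to 8 bits per channel only at the output stage
uint32_t rgb_output(struct rgbf c)
{
    return rgb_pack((uint8_t)(c.r * 255.0f + 0.5f),
                    (uint8_t)(c.g * 255.0f + 0.5f),
                    (uint8_t)(c.b * 255.0f + 0.5f));
}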
Packing has its issues as well. If you are dealing with little-endian vs. big-endian systems, there might be overhead associated with unpacking and possibly rearranging a packed RGB value. If you are processing colors at a massive scale, this can have performance and even power-consumption implications.
I may have been lucky in that my very first CS professor was hell-bent on teaching the importance of thinking deeply about data representation BEFORE thinking about code. He'd repeat this mantra till you were sick of hearing it. Years later I'd learn to appreciate this bit of wisdom in more ways than one.
I think it depends on a lot more of the circumstances than this. For example, in many languages, you can intern the strings so that a string equality test is just a single machine instruction. And if these color representations are crossing some interface that needs to be kept stable, it's a lot easier to add new colors if what's crossing is "red" or "#ff0000" than if it's "2". And it may be that what you're doing with the colors is just generating HTML, rather than doing multi-way switches, in which case the enum implementation has no advantage over the string representation; it just increases code duplication.
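For example, a toy version of interning in C; once every incoming string has been replaced by its canonical copy, equality really is one pointer comparison (an illustrative sketch):

#include <string.h>

static const char *known[] = { "red", "green", "blue" };

// Return the canonical copy of s, or NULL if it isn't known
const char *intern(const char *s)
{
    for (size_t i = 0; i < sizeof known / sizeof known[0]; i++)
        if (strcmp(s, known[i]) == 0)
            return known[i];
    return NULL;
}

// After interning both sides, the test is a single pointer compare:
// if (color == intern("red")) { ... }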
The probably more important consideration is that with an enum, your compiler can catch misspellings. Depending on your runtime environment, this can be a huge killer advantage. In particular, if your runtime environment can't do much beyond blink an LED to report errors, compile-time checking is really really important.
I think we are slicing this a little too thin. My original point is that it is important to understand that the choices made when representing data can be important. My off-the-hip example was not meant to be definitive.
Color.Bleu will give a compiler error. "Bleu" will not. And is "White" a valid color? No idea, need to check the docs. Try Color.White and the compiler or editor will tell you.
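The same safety net works in plain C, for what it's worth, and the question of whether White is valid is answered by the enum definition itself (illustrative names):

typedef enum { COLOR_RED, COLOR_GREEN, COLOR_BLUE, COLOR_WHITE } color_t;

void paint(color_t c);              // takes the enum
void paint_by_name(const char *n);  // takes a string

// paint(COLOR_BLEU);      -> compile error: COLOR_BLEU is undeclared
// paint_by_name("bleu");  -> compiles fine; the typo only surfaces at runtime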
Another +1. When I've used this method I've noticed that the tendency at first is to try to just remember the code and copy it out of your memory. There comes a point, though, when you don't remember it well enough to "copy from memory" and this naturally segues into a focus on the logic and syntax of the code so you end up reconstructing working code without fully remembering the original.
One thing that has really sped up my learning over the past few months on Codecademy: complete a lesson, then in another tab open their scratch-pad feature and try to retype the code, not only from memory but also while thinking through the logic.
It has really been effective for me; I feel like I now have a pretty good (beginner) grasp of JS and Python.
Interesting concept, especially if you consider that better memory for plausible situations is one of the differentiators between experts and beginners, be it in programming or chess, because experts have developed mental shorthands for common situations.
This is exactly how I handle learning languages/frameworks. I wrote a little something on it as well [0]. I value thorough comprehension over "how fast can I finish this book?". I try to get my hands as dirty as possible. I want to see the warts of this thing I plan on using to build projects. Trying to rebuild book examples from memory forces you to consider the requirements and the end goal. This was particularly useful when I first learned about linked lists and queues in C.
Reimplementing standard library functions is great for Haskell too. The biggest of the "big breakthrough" experiences I had was from trying to reimplement the standard monadic State datatype.
I agree with this especially when you are doing tutorials for this reason: when you type out code, you inevitably make typos that cause bugs. This forces you to learn what some of the common possible error conditions are and forces you to learn to find and resolve them. Learning these things when you have the safety net of a tutorial is ideal, because if you can't figure it out you can always compare with the tutorial's code and learn what your mistake was, and you still get the benefit of learning about the particular error you experienced.
Can anyone else relate to this? Back in the day (early 90s), I would go down to the public library, pick out a thick book full of computer games written in QBasic, and of course I had to type them all in by hand. We didn't have the internet yet; the idea of copy-and-paste from somewhere did not even exist in my mind. So I would type these 10,000-line BASIC programs into our 386 computer, one line at a time.
For some reason, probably because I was a tiny kid learning a new thing, this tedious task was so much fun. And that's how I learned my first programming language. Never opened a single tut or instructional book.
Similar story here, but in the late 90s. I used to print out TI-82 calculator apps/games and type them in by hand because my parents wouldn't buy the $60 connector cable. It was surprisingly fun. All my initial programming knowledge came from debugging the typos I had made.
This is pretty much how the book I'm using on Pygame works. It's a load of games written in Pygame that the author encourages you to copy out and figure out how they work yourself. Afterwards he explains in detail how it all works.
Writing it out really helps with learning how you can put the simple stuff together to make more complex stuff.
I was surprised at how effective it was when I started learning C# by forcing myself to type all of the exercises. I was also pleased that it really helped me quickly recognize and remember patterns (code shapes), even before I understood how they worked. Plus I could leave comments for myself to come back and learn more in the difficult spots. Highly recommended.
My very first programming class was Java and I had no experience prior.
I felt so alive writing every character and every line. I was too excited to read the paragraphs of description in the textbook because the code told most of the story anyway. I could look at it, read it and mostly understand it, then retype it one character at a time and watch it work. Once it worked, I varied small things. Anything I could improve. Sometimes the goal was to make it as condensed as possible, sometimes it was to try to make it simpler to read, or to adapt a method for ints to also work on floats, or to add parameters making the code even more flexible. Kid-in-a-candy-store mentality fits perfectly.
I did this for almost every example in the book, even ones we didn't cover in class, a few hundred in total.
Copying and pasting is fine if you're copying code you've written yourself. However, as you said, it's a lot more beneficial when you type out code because you'll make mistakes and have to figure them out.
It's very hard to learn how the code you're using works when all you've done is copy and paste it.
As someone who learned to code recently (last 3 years), this advice is good but dangerous in the long-term.
I started by typing out other people's code from various books or tutorials, but found that it produced a feeling of "success" that made me lazy in taking the step to producing my own code from scratch. I had that feeling of progress, without actually making real progress.
In my experience the best way to learn to code is to pick a small-ish project that you personally want to use and just get started. You'll screw up a bunch of times, but it's the only true way to learn coding.
Neither way is "the best way to learn code"; both are good ways, and they each guard against the other's harmful influence. Do both: when you want to copy code, retype it thoughtfully; also start a small project to do for yourself and jump in.
I disagree. I have always thought that one of the keys to being a good programmer is reading a lot of other people's code, and it's a process that doesn't stop just because you're more experienced. I've been programming for maybe 15 years now, and one of the ways I learn how, for example, a new algorithm works is still to type in an example and run it.
Integrating the manual activity of typing with the mental activity of thinking about what you're typing in and the visual activity of reading someone else's code just stimulates a lot of different neurons and helps you learn.
If I may add to this, I found it really helpful to comment every line as I went. Even today, when I want to learn a new language or framework, I pick a repo on GitHub and copy the whole thing line for line, commenting as I go. I find that this helps a lot.
I work this way too, and for the last several years I have been looking for software that would let me mark up some code without actually changing the underlying source files (keeping the annotations on the side). Putting your own comments in screws up diffs and all sorts of things like that.
You could try stripping comments before diffing. If you have single-line comments you can use diff -I; otherwise try this: http://freecode.com/projects/stripcmt
People have been telling me this when I express an interest in the interwebs, but as someone who only has a loose understanding of what a small project is, how can I pick such a thing?
> This website uses CloudFlare in order to help keep it online when the server is down by serving cached copies of pages when they are unavailable. Unfortunately, a cached copy of the page you requested is not available, but you may be able to reach other cached pages on the site.
I'm probably being significantly thick-headed, but what exactly is Cloudflare even doing if it's not properly caching pages before traffic hits? And why would they want to advertise that fact on the error page?
If the site goes down before anyone even hits it, I imagine that CloudFlare can't exactly get a copy. Given that CloudFlare is a free service, I suspect that it doesn't automatically cache every page, either- it might wait until a page gets to a certain level of traffic before doing so. I have no idea, though.
I don't have time today to provide a citation, but I believe the act of writing and transcribing involves a different part of the brain than reading... this is my layman's theory as to why writing plus reading is so much more stimulating than just reading, even when you can easily comprehend the code on sight.
It's odd, but in my experience I have found it to be the opposite. Perhaps because I've grown up with keyboards, I find typing something out to be quite stimulating. When writing things down I find there is a huge disconnect between the funny scribbilous motions of my wrist and the meaning of the words they produce. Although perhaps that's a left-handed thing.
In the old days we had tape drives (at best) hooked up to our TRS-80s, Commodores, and the like. The only way to run the programs in popular magazines at the time was to type them in. If you didn't have storage and wanted to run a program again, you typed it in again.
It was a pain, but you certainly learned things (like how poor/slow a typist you might be).
This slow process allowed you time to see and think about what was going on.
I also did that but learned a great deal less than I could have. The already hard-to-read code of the machine's BASIC implementations (two-letter variable names, tops) was further mangled by being compressed to fit into the magazine, to say nothing of the programs that were basically just machine code written in hex.
I picked up a few ideas, but you didn't really get the sense of what the program was doing in most cases.
My school library used to have books full of games, text art, and programs written in BASIC that I used to type in line by line to our C64. I wish I could say I learned a lot about computer programming (I didn't) but the biggest thing I do remember was being thrilled at being able to make typed instructions turn into things happening on the screen.
These programs would get run only a handful of times, since I had no way (or it never occurred to me) to save them for later use.
It is essentially what Zed Shaw proposes in his "Learn Code the Hard Way" books. It definitely helps to type in everything, and it can be applied to almost anything we need to learn.
It's the memorization trick teachers always told us to use: rewrite the notes you've taken or the things you need to remember. It's somewhat boring to apply, and while doing it you don't necessarily realize you are learning, but it does work great.
Programming extends the trick: what you type actually gets interpreted by the computer, and results come out of what was typed.
When I was in school, I found that the most important part of the notetaking process was manually writing the notes during lectures. Reading them afterwards rarely helped me remember or learn more. Reading notes written by someone else was next to useless.
I used to write notes and then throw them straight in the bin. My handwriting was effectively illegible and if I needed to know something, I'd look it up again and write new notes... and throw those away too.
Cognitively speaking handwriting and typing are different tasks and thus have different results on the learning process, which may or may not vary individually.
I think the best method to learn coding highly depends on the learning style of the person.
Here's another trick that's helped me a lot over the years:
Open up your favorite library on GitHub, go back to the very first commit, and start reading, commit by commit. If you don't understand something, copy the difficult section, line by line, into a REPL, inspecting variables and/or function results.
That may be an effective way to learn (i.e., it works), but I have my doubts that it's a very efficient one (i.e., you could learn more in the time you spend retyping that code) or a very entertaining one (i.e., other ways of learning to code are more interesting).
I've found that it's easy for me to space out and just go through the motions if I'm simply reading or copying and pasting. Typing the code manually (or writing full notes in a university course, et cetera) forces me to focus enough that it actually drastically increases my efficiency.
So in my case, it's more work to type out code than to copy and paste, but it saves work in the long run.
You could create more, but whether you'd learn more is another matter. I know that whenever I've gone down the "copy a bit of code and modify it" route it's never sunk in as well as manually retyping it all.
A corollary, a comment on method, and an anecdote:
Related: this approach also reveals syntactic errors in printed books. This reinforces the principle, since it shows that even published authors make mistakes. It also means that, even following the text verbatim, I have to account for code errors and the standard bugs (as others note here).
Somewhat related to other comments here regarding closing the book and trying to recreate the code or underlying logic, I often think of REPL iterations in relation to industrial-scale engineering (rocket launches, bridge-building, etc.). If the cost of the test-iterate cycle is not seconds or minutes, but months and millions of dollars, we'd think very carefully about the logic and edge-cases. Granted, I like that fast iteration is possible, but in cases where that's not an option, you'd just have to use other approaches. Mostly, that would mean truly, deeply understanding what you're asking the machine to do.
Despite being a fan of Gonzo Journalism, I've never heard the anecdote about transcribing entire Hemingway novels. Re-typing code examples verbatim seems sort of obvious, but re-typing narrative fiction, not so much. So it's sort of fascinating to hear that anyone in fact did that (and for exactly the same reason).
I've been researching these kinds of educational and cognitive topics for some time, and it's quite obvious that writing code and writing fiction or non-fiction narratives are cognitively different tasks. However, further scientific research is needed to establish whether or not retyping narrative (or even code, I might argue) is a good method.
This is how I learned to code. I sat down with the Turbo C book and typed out the examples. Later on (early-mid 90s) I would download code from CompuServe. Instead of using it wholesale I would try to recreate it using my own style and naming convention.
It led to a lot of frustrating moments though. I remember sitting there and fuming trying to figure out where a variable came from. (This was when C++ was starting to get popular and variables started popping up everywhere. I was trying to figure out where in the world a counter was being declared in a for loop. Turns out it was being declared in the for loop.)
A bonus of this process is that I made a TON of mistakes and was exposed to dozens of programming styles. Because of this I can just skim code and error messages and have a general idea of what to look for.
I learned to code essentially from books. I tried the type it out method. It was... um... a dismal failure. I would type it in, and I would have no idea what happened. It was all in code. There was magic in them thar characters, and the magic wasn't working.
So I regrouped and started writing my own code. Lots of code. Very bad C++ code. Excruciatingly bad code. Over and over again. I desperately wanted to understand, and I studied the books time and time and time again, taking those concepts into my very bad code, and after time, my code became less very bad and just... bad. After about two years of this, I was barely marginally competent, but I did pretty much understand how to write code. Then I went off to college and learned computer science.
So a long time ago there were two C compilers on the Amiga: one (Lattice C) was the "official" C compiler and it was very slow; the other (Manx Aztec C) was really quite fast. The nice thing about the fast C compiler was that you could compile and know right away where the syntax errors were; the slow compiler ground along forever.
A side effect of this was that before kicking off the slow compiler I would do a careful read of my code to be sure I hadn't done anything stupid, often finding other bugs along the way before I finally kicked off a compile. Whereas programming with the fast compiler was more iterative, and it made me rather lazy, since I could just compile/edit my way to a clean build without thinking too hard about the code.
Someone - was it Kernighan? Ritchie? - wrote a nice description of this. In short, when you find the source of a bug, don't just fix it - stop and think about what caused you to make that error, and think about what else in the code might be a victim of the same line of reasoning (note: not necessarily the same code - just the same design decisions).
Back in the day when you had to submit your code to the computers overnight and wait until the next day to find out if you got your output or just a one-line compiler error, you were much more careful to avoid errors, big or small.
Having a longer iterative cycle can be good training.
Even more important is to go over every line and don't go on to the next line until you are sure you understand. Just like you can read without understanding you can type without understanding as well. Forcing yourself to understand will make you a much better programmer.
Well, in one sense, I agree. Make sure you know what each line does on its own. But sometimes a single line doesn't make sense, in a larger sense, without something that comes later. So you have to have some tolerance for not understanding each line before you move to the next. It could be put in terms of knowing the "what" vs letting the "why" be uncertain a while.
This is very much in line with the way the Learn Code the Hard Way books are written. There's even a call to action at the beginning of each book imploring the reader to avoid copy-paste.
For roughly the first year when beginning to learn to code, I refused to use an IDE or any editor that had intellisense. The result was having to remember the syntax of different languages, method names, etc.
I was in college at this point, so it made spotting syntax errors easy for multiple choice questions or debugging on tests.
I also approached coding problems on paper before typing a line of code. This helped to grow how I approached problem solving, rather than whether the page would run properly or not.
If I were to start fresh I'd still take the same approach. I'm a huge believer in repetition for remembering, and writing out code rather than copy/pasting or have it be autocompleted, eventually pays off.
"During this time he worked briefly for Time, as a copy boy for $51 a week. While working, he used a typewriter to copy F. Scott Fitzgerald's The Great Gatsby and Ernest Hemingway's A Farewell to Arms in order to learn about the writing styles of the authors."
Tommy, I saw the domain name of this post and went to the main site. My suspicion was correct, you're from Richmond. I am too and went to Virginia. Awesome to see a Richmonder on the front-page of HN!
I agree with this a lot. Just the act of typing it out is better (even if just marginally) than straight up copying and pasting especially early on in your career. There are some pretty complex tutorials for Java EE that rely on copy/paste and even with the context it's hard to absorb a lot of it.
Basically how I learned to code as a kid: type out the programs from Byte/Compute magazines. I eventually found books on BASIC in the library and typed out the examples from those. Then I'd modify them to add my name in there or make the dot jump a little higher. Eventually I was writing my own programs.
I assume it's some sort of "kinesis"-style loop in the brain that is at work when you do this. "Hands-on" learning is a very useful tool.
"The right approach is to get your hands dirty, get inside the core of the code and understand it, thus implement it well. We should also try to remember the code as much as we can at the first place so if we face a similar problem later, we can solve it in no time. Also I would suggest not to copy paste it and try to type it on your own (if the code consists of a few lines), this approach will definitely help you to remember it for later use. The coding ninjas, the coding beasts, the rock start programmers, whatever you call them, all the great programmers definitely have one thing in common and that is they are like living library of the programming language and framework they use. They spend maximum time in getting things done (not finding the solutions to the problems they have already worked on before)."
So one thing that I want to point out, many, many tutorials and books for coding teach bad design practices or provide code that is overly verbose, to make instruction in a particular element clearer. Java is particularly troubled by this imho. So if you are going to use code you find, you should look at good code. I recommend code from a mature open source product.
This is one reason why I skipped Java and went right to learning Python and Ruby (after background in C). As a beginner, there was no context, no way for my n00b self getting into OO programming to sort the good Java from the cruft.
In some ways I disagree with this: I think it depends a lot on the person writing the code. In the past I have found that retyping code can involve an essentially copy-paste level of thinking (i.e., none at all) and just takes longer. There are times when typing out example code makes me pay less attention than copy-pasting would: when I copy/paste, I at least usually look for the important parts of what is going on, but when I am typing it out, I am too focused on making sure I have a comma instead of a period, or matching parentheses, etc. Not to mention I tend to leave out comments that probably should be there, out of sheer sickness of typing.
To be fair the case I am imagining myself in is the times I have looked up something relatively simple like opening a socket or something pretty mundane, so this may not apply to all situations.
Not sure about just typing, but working through things for yourself helps, even if it is repetition.
This is how I taught myself mathematical problem solving (and math as a side effect). I would read the problem statement and, instead of looking at the solution, try to solve it for up to 15 minutes, or sometimes even for several days, depending on how much value I was planning to get from the solution.
The Brachistochrone problem(http://en.wikipedia.org/wiki/Brachistochrone_curve) is an example of a problem that I tried to solve for several days. Newton solved it in 3 hours, and when you learn math like that you don't take things for granted. You can admire the amount of insight that went into the steps. You can also internalize the knowledge obtained thus exponentially more effectively than you would by listening(and falling asleep).
I've heard this story many times over the years and always found it amusing, but I was skeptical that it did HST any good. I mean, what use is it to blindly copy something letter by letter?
I did NaNoWriMo on a whim this year and it was a lot of fun (I doubt I'm actually any good at it, though). In doing it I found myself going to Gatsby and a Hemingway book (The Sun Also Rises) for examples of clear, direct prose. I thought back to this Thompson anecdote and thought to myself that if I ever wanted to really pursue writing, a straight copy of one of those books would help my prose tenfold.
So I agree with this article. Type out the code you're borrowing from, don't copy and paste. Also, tchlock23 is right in that you also need to start some small projects from scratch and suffer through them with minimal help.
The book, "Day," by Kenneth Goldsmith was created by typing out, word for word, an issue of the New York Times. It's considered a book of poetry.
Typing out a novel word-for-word doesn't seem all that insane to me. And it likely isn't a useless exercise either: artists learning to draw and paint have often tried to reproduce the works of the masters in painstaking detail in order to internalize the techniques and effects used. I suspect the mechanical motions of typing out a piece of writing you admire will elicit much the same response in your brain, as it likely shares the same mechanism of reinforcement.
Might also be worth noting that some of William Burroughs' writing (as he self-describes) was created by cutting a type-written page into quarters, rearranging various quarters, and then dealing with whatever came of that (I'm of course paraphrasing, from memory of reading his works years ago, so forgive any vagaries).
I too have been doing this with Zed Shaw's work. My biggest piece of advice is this: make sure you read the code and try to understand it before you type it in. When you are given
char *names[] = {"Bob", "Dole", "Bananas"};
don't just write out the text as you see it (which is what I did too often). Think through and say "ok, I am creating a variable called names. There are brackets, so it must be an array, and there is an asterisk after char, so it must be an array of pointers to char. Then I construct it with string literals..."
I caught myself writing whole source files from a book without understanding a single line. I agree with danielweber when he says to try and reproduce the code before typing it out verbatim.
I don't have the same experience as most people here. I usually study by reading or by trying to recreate the scene/code/history in my head, and that works great for understanding and memorizing. When I write, it takes time and feels like wasted time.
It really does help - I had done this in the past, and I need to start doing it again. No matter how experienced you are, I think this would always be helpful as it can open your eyes to new ways of thinking about how to go about solving a problem.
That being said, a word of advice: make sure the code works before you start copying it. Nothing is more frustrating than grabbing something open source, typing it in by hand, then finding out it doesn't work. (Granted, if you learned enough from typing it, hopefully you understand it well enough to fix it, but you could also have picked up bad habits if it was broken to begin with.)
After 20 years, I've recently switched from csh to bash (so I can use rvm), and I've been applying this neuro-hack to less-used commands. When you copy-and-paste, there's a kind of finger-learning that you miss out on.
Learning to play the guitar, for example, is difficult and yet crazily creative for this reason. It is (almost) completely finger-learning, and thus exists in many original forms, each with the signature of the individual who created the piece.
The way I learn math and physics is to write out every equation. I can nod at something I don't fully understand and move on, but I have great difficulty writing something I don't understand and moving on.
This is so very true. Copy & Paste never made anyone better at anything (except for muscle memory in your left hand). For a while I typed out every single piece of code I used from the internet and it gave me a deeper understanding of what's going on in this code. I reflect on the code while I'm typing it.
I also do this with math these days. Whenever I'm learning a concept I'm writing out all the definitions and theorems, which gives me time to reflect on them and helps memorization. In general, writing is an excellent way of learning.
Also: use a REPL. I typed all the examples from the Well-Grounded Rubyist into irb, played with the examples to test edge cases, and ended up learning Ruby faster than I thought possible.
Even with a computer science degree I still do this. If I read a tutorial or a book about a new library or language, I type all the examples. By the time I'm done, I know what I'm doing.
That's also how I learn. As I never met anyone who learns that way I always considered this behaviour weird. Nice to know that I am not the only one :).
I went through the entire http://ruby.railstutorial.org/ by Michael Hartl and decided to type every code sample in vim (and in certain cases predict how to create the code before looking). It was a lengthy process, but I definitely got more out of it. Even still I find myself copy pasting code snippets, but after this little reminder I will be typing everything out by hand. It's a great way to really learn a language!
Couldn't agree more. Never download the resources that books put online (code samples and such) - it's tedious writing out a load of code but you learn a hell of a lot faster.
I agree with this article, typing the code out (as long as you are focusing on it and not just typing absentmindedly) really forces you to read all of the code, which should help you understand what is happening. Just as in the article I have also found myself unable to write my own code at times due to working off of other programs or snippets. When I type it out though I gain a much more thorough knowledge of the api or language.
This is a great article and I absolutely agree with the message. I'm learning Java and I use Eclipse as my IDE of choice. The main advantage of using it is how easy it is to access JavaDocs for classes, methods, and whatnot, but I've made a conscious effort not to rely on auto-complete features if I'm implementing something I've never used before. But it is tempting to just use that for everything.
I've found a similar tactic useful for learning APIs: don't reuse example code, and don't reuse other people's code. Doesn't matter how annoying the API is - always write your code by hand. Re-type bits out of the code you were thinking of copying, if needs be! It's much easier to spot things worthy of note as you type them in than it is by reading them.
I totally agree with the title of this post. Oftentimes I have pledged to myself never to copy and paste, and even though I always get an idea first of what a piece of code is doing, most of the time I still end up making mistakes with copy + paste.
In the end, the time spent rectifying the errors that result from copy + paste is much more than the time I thought I would save.
I started using this method after reading one of Zed Shaw's books, and it really works. I repeat an exercise 9-10 times, until it's automatic. It's rote, but I'm actually starting to understand concepts. I'm also able to apply what I've learnt when I'm writing my own programs. It's not easy, and maybe it's not the best way, but it's working for me.
Just out of curiosity, how many people took their first steps in programming by typing out code from a book or magazine?
- When I started out, including a floppy disk with a book or magazine was a rarity, so programs for BBC BASIC, the Spectrum, and the PCW-9512 had to be typed in by hand, with frustrating hours spent trying to figure out why they didn't work (usually typos)
Now that there are 171 comments on this thread, I hope it won't bother anyone for me to ask that a few of you help us out by beta testing a game we're releasing.
letterlasso.com - sign up with your email and we'll send you a TestFlight email in a couple of weeks. Anyone who happens to see this and is willing to help us, it would really mean a lot!
I'm a CS major and coded all my life. The best way to learn is just to give yourself tasks, like building something specific and then try to make it.
From copy and paste you just learn the syntax, which is useful but not that much. From building something from scratch you learn the principles which is far more important.
Multiple times I've heard the same advice for copywriting. As in, I've heard a few people tell stories of digging out the works of great copywriters and hand-transcribing dozens or even hundreds of pieces in pen while taking notes along the way. "Here is the call to action. Here is where he's adding urgency."
Recently, I've also started creating notes while studying something I don't understand. Just a small dev_project.txt file, where I type out bullet points about how I think the code is working. If after some time, reading it again it's taking me too long to get something, I just take a look at the notes. :)
I think it is important to have a look at the code and have it imprinted as an abstract algorithm / pseudocode that you can reproduce easily later. It feels like such a natural thing to do; I am not sure what kind of people would not do this.
I created http://typing.io primarily to help programmers practice typing, but it also allows users to explore open source code like jQuery and Rails by typing through instead of just reading.
When teaching people to code, I'll dictate for a bit, and then start asking questions as I go, and move them gradually toward building things on their own. Really only a technique that works one-on-one but it's worked very well.
A vital part of that is to ask questions that don't just regurgitate dictation, but to encourage risk taking. The more comfortable a student is with the possibility of being wrong, the more natural experimentation becomes. Always connect it back to progress toward something they are passionate about, whether they are technically inclined from the outset or not. This is why I like to skip "Hello World" type coding and fast-track them to making a tool that they actually want to use in their lives.
This is something I really just learned through experience. I'll frequently just look at the example, type it out, and then immediately start fiddling with it.
I learn coding through doing it myself, not from simply seeing someone else's work.
I try to do this when learning a new language/framework from tutorials and ebooks, but it hadn't occurred to me to do it when using other people's plugins or finding answers to code problems online. I like the idea a lot.
Agreed, but don't just type it out line by line. I find trying to re-implement the code works better; it gives me a better idea of how something works and how people usually deal with certain problems.
This is a great point. It's why I always buy paper copies of programming books despite most having free or cheap PDFs online. When I can copy and paste I become tempted to do just that and I never learn anything.
Agreed. A good way to get better at coding is to go completely offline without any books or help whatsoever. Write as much code as you can from memory and try to solve your problem with your own existing knowledge.
This is an old trick. Bach learned how to compose by copying his older brother's music when he was about 8-10 years old. His brother wouldn't let him toy with his compositions, so he copied as much as he could.
I totally agree with the central thesis of this article, typing out the code for open source projects I like has been my secret weapon for learning technologies faster than most people think possible.
When I was a kid I typed out every example in a Logo book and learned nothing. It doesn't matter whether you type it out or not; either way, learning means going line by line and understanding the code.
Can't reach the site, but I completely agree with the title. I'll even retype my own code when refactoring things a lot of the time, just to make sure I understand what I'm slinging around.
I tend to agree because typing it out helps you commit the code to memory. But these days I do cut and paste because I read every single line of code and understand what it does before I do.
I'll add my 2 cents: also try to write a program first on paper with a pencil, and only then in your editor. Full-blown IDEs are not the best way to get started for relative beginners!
I think CodeSchool.com is great, and well worth the cost. You actually don't type out a tutorial, but rather, have to think about the material covered, and code a similar version.
Kids these days have it too easy with this new-fangled Internet thingie. When I learned to code in QBASIC from dead-tree books, I didn't copy-pasta because I couldn't.
I wholeheartedly agree. I spent my first few years doing this, and it helped expand my framework vocabulary faster than my co-workers at the same level.
It probably helps more to type out the program in a logical order - in the order of execution, like a pre-order traversal of the (module-dependency) tree.
Say, in C, you could type main(), and when there is a function call, go there and type it, understand it, and come back. That way you get the flow of it and you'll understand the whole code in one go.
I always type the code too, for similar reasons. People often ask me why I do that, "cause it's slow," but I like it. I remember and understand better that way.
When you're re-typing the code from the video into your editor, you are merely copying; you're not inventing, not producing the code. But it is useful nevertheless. Copy-pasting makes you dumber, while re-typing trains you, if only slowly.