In doing so, it simply overloads the student with syntax memorization and conceptual overhead. It bothers me so much that very few books (Kernighan being a rare exception) talk about WHY. WHY. WHY is a variable needed? WHY is a function needed? WHY do we use OOP? Every single book out there jumps straight into explaining objects, how to create them, constructors, blah blah blah. No one fricking talks about the point of all this.
Teaching syntax is like building muscle memory for learning guitar: it's trivial and simply takes time. Anyone can learn syntax, and it's only one part of learning how to code. The real problem is that concepts are explained on their own without building on one another. The famous Python book (Learn Python the Hard Way) explains loops in their own chapter and provides examples, but never builds on the idea. There should never be a separate chapter for variables, loops, functions, etc. Chapters should be:
Chapter 1. Setting up the problem (Goals)
Chapter 2. Defining Inputs/Outputs (API)
Chapter 3. Automating something (Variables, loops)
Chapter 4. Abstraction of something (Functions)
Chapter 5. More automation! (Combining all)
Chapter 6. Splitting code into multiple modules (Growing the project)
Chapter 7. Objects (New type of abstraction, OOP)
Chapter 8. Reusability of Classes (Inheritance)
Chapter 9. Safety/Security (Encapsulation, tie it back to Chapter 2.)
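To make that build-up concrete, here's a minimal sketch of how Chapters 3-5 could layer on each other. The problem and names are invented for illustration:

#include <stdio.h>

/* Chapter 4: abstraction - give the repeated work a name */
static void greet(const char *name) {
    printf("Hello, %s!\n", name);
}

int main(void) {
    /* Chapter 3: automation - a variable and a loop replace copy-pasted lines */
    const char *students[] = {"Ada", "Grace", "Dennis"};
    for (int i = 0; i < 3; i++)
        greet(students[i]); /* Chapter 5: loop and function combined */
    return 0;
}

The point is that the loop only makes sense because of the problem in Chapter 1, and the function only makes sense because the loop got repetitive.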
NAND to Tetris 
Handmade Hero 
The Nature of Code 
Harvard CS50 
How to Design Programs  (thx minikomi)
It uses Karel the Robot, which is a kind of pseudo-code robot written in a Java environment. So you’re trying to use code to solve problems, and use code composition techniques to make it happen.
For example, Karel can’t turn right, so you need turnLeft() three times. But that’s a pain, so you create a turnRight() method that tells Karel to turn left three times. And just like that you understand function composition and why you need functions.
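Karel itself lives in a Java environment, but the composition trick fits in a few lines of C. turn_left() here is a stand-in I made up for Karel's primitive (the real one is turnLeft()):

#include <stdio.h>

static int heading = 0; /* 0 = north, 1 = west, 2 = south, 3 = east */

/* stand-in for Karel's primitive: rotate 90 degrees counterclockwise */
static void turn_left(void) { heading = (heading + 1) % 4; }

/* the composition from above: three lefts make a right */
static void turn_right(void) {
    turn_left();
    turn_left();
    turn_left();
}

int main(void) {
    turn_right();
    printf("heading: %d\n", heading); /* 3 = east, i.e. one right turn from north */
    return 0;
}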
Similar problems force you to use every low-level aspect of programming, including loops, variables, solving off-by-one errors, etc. And by the time you’re done you’re actually using classes and kind of accidentally writing Java, without fully understanding that’s what’s happening.
I consider it a work of art, and I wish someone would make an open source, extensible Karel environment that’s really well done and supports other languages. In fact I may just build it.
It's practically right there in the title. I think this will actually evaluate true in every single programming language as well, even those yet to be written:
#include <string.h>

const char *title = "Teach Yourself C in 24 Hours";
const char *meaning = "Cynical money-grab from people that haven't learned there is no shortcut to learning";
strcmp(title, meaning) == 0;
Edit 2: The most useful courses I ever took, that taught me the mental framework for approaching code or any code-like problem (and really any problem that can be chunked down to discrete bits of work) are the following, roughly in order of importance:
2: Data Structures
CS50 is great for core principles, but you need to go into that course with some knowledge of, or experience working with, a programming language of some kind. I found it just kind of dropped you into your first algorithm problem without teaching anything practical about implementation except how to compile a C program. Regardless, it's a great course.
I'll definitely have a look at the others you and others listed. I like to run through these things to fill out my core understanding or to refresh it. And I still need to finish NAND To Tetris when I free up enough time to commit to it.
CS50 - Just watch week 0: it literally starts with a survey showing 73% of the students have never taken any computer programming course or know anything about programming. The course starts with the absolute fundamentals - the decimal system versus binary, the power cord of the computer where the electrons come in - and then builds slowly toward introducing C.
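That binary groundwork pays off once C shows up. Something like this (my example, not the course's) is the kind of bridge it builds toward:

#include <stdio.h>

int main(void) {
    int n = 13;
    /* print the low 8 bits, most significant first: 13 -> 00001101 */
    for (int bit = 7; bit >= 0; bit--)
        putchar(((n >> bit) & 1) ? '1' : '0');
    putchar('\n');
    return 0;
}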
It’s funny, when I first started programming, I thought “why do all these resources assume I know so much?!”
Now I find myself thinking, “why are they assuming I know so little?”
From the wikipedia page: HtDP was designed as a textbook to address certain issues that some students and teachers had with SICP.
I found the Harvard CS50 really flashy and full of fluff. Didn't enjoy it. I took an alternative to it, Introduction to Computer Science using Python on edX.
I have a boxed set sitting on my shelf; I'm told it is worth reading, but I can't seem to find the time and motivation.
I've never taken linear algebra, so that is what is holding me back right now from continuing Concrete Mathematics. I bought Gilbert Strang's linear algebra book and it seems good so far, but I am busy with schoolwork now, so I'll have to get back to it after I graduate in the spring.
As for this particular link, all image links in the text seem to be broken, and a pdf of the book can be found elsewhere.
The book I learned from was C in 12 lessons by Greg Perry, which I found is still on the shelves to this day, 25 years later. Terrible book.
The bad part is you don’t learn how to do anything other than a little bit of simple input and output and loops, plus the worst explanation of pointers you can think of.
You can’t do anything in the base language, but that’s just the way C works: you need a platform-specific book to learn to do anything worthwhile. This is why the ’80s assembler books were so useful and inspiring.
For what you end up learning in the book, you could have actually spent the 24 hours (or less) using something like Python, where you’d be way more productive and a hundred times less confused - actually doing work instead of messing with pointers and fscanf().
Thank you for your response. At this point, I think that I must say that your brain and my brain must operate very differently. To me, messing with pointers is the beauty of all that - and it so easily translates to the assembler-lang-brain.
EDIT: Did you not find the K&R explanation of pointers sufficient?
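For anyone wondering what that translation looks like, here's a tiny made-up example of the kind of pointer code that maps almost one-to-one onto address arithmetic in assembly:

#include <stdio.h>

int main(void) {
    int a[3] = {10, 20, 30};
    int *p = a;         /* p holds the address of a[0] */
    p++;                /* advance one element: address += sizeof(int) */
    printf("%d\n", *p); /* dereference: load from that address; prints 20 */
    return 0;
}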
Today, a generic book on <programming language> will have to spend a significant portion of its time explaining platform specifics. Granted, there are really only three platforms these days (Windows, Mac and Linux), but they are different enough to be a pain to explain in a general-purpose book.
If you don't plan to learn C++ later, okay, do use a modern version. But if that is a possibility, better to start either with a completely different language with clearer semantics (Pascal was great; maybe Go or Rust are good now), or with the pure common subset, which wouldn't add to later confusion but would allow you to use more advanced abstractions as needed.
From my experience teaching software engineering to high school students who have already passed their algorithms & programming courses, small differences in practices and/or semantics (svn commit vs git commit) are what kill learning progress.
I am glad the C11 committee put more weight on implementability for C++ compilers than C99 did. Variable-length arrays in particular feel like the C99 committee decided to poke the C++ committee in the eye with a stick. (Given how the C++ world treated C at the time, though, I understand the temptation.) That said, if being compilable as C++ were a hard requirement for C features, the world would be a poorer place.
Meaning VLAs that can blow the stack, and memory-related functions that still track pointer and size in disjoint variables.
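To spell out both complaints, here's a contrived sketch (the names are mine): a C99 VLA whose size is only known at run time, and the usual C idiom of carrying a pointer and its length in separate variables:

#include <stddef.h>
#include <stdio.h>

/* pointer and size travel as disjoint parameters - nothing ties them together */
static double sum(const double *data, size_t n) {
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += data[i];
    return total;
}

static void demo(size_t n) {
    double values[n]; /* C99 VLA: allocated on the stack; a large n can blow it */
    for (size_t i = 0; i < n; i++)
        values[i] = (double)i;
    printf("%f\n", sum(values, n));
}

int main(void) {
    demo(8); /* prints 28.000000 */
    return 0;
}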
Of course, after some time I thought of googling "best book on c lang", and then I discovered K&R.
Anyone capable of thinking that thought (about any programming language) is so divorced from the vast range of pragmatic contexts within which programming happens, they might as well be shipped off to a monastery to live out their days.
My point wasn't to get pedantic nor to criticize this particular sentence alone. I'm essentially saying that beginners' material could often be as easily comprehensible without the sort of "simplification" that actually says something wrong.
Even if you only consider compilers and interpreters (in real life there are many in-betweens: bytecode/VM, JIT, etc.), a language is not inherently either interpreted or compiled. Maybe its reference/canonical implementation is an interpreter or a compiler, but that's all.
What would stop anyone from writing a C interpreter? Or a Python compiler?
Now consider OCaml, whose canonical distribution comes with the `ocaml` interpreter and the `ocamlc` and `ocamlopt` compilers: how do you classify the language?
Here's what Shutt has to say about the difference between interpreted and compiled languages: https://fexpr.blogspot.co.uk/2016/08/interpreted-programming...
tl;dr: The choice of compiled vs interpreted is made early on and greatly affects the design of the language semantics early on.
Moreover, the distinction that Shutt makes in this blog post between compiled and interpreted is not very clear: most of what he says reads to me more like a distinction between static and dynamic than between compiled and interpreted, but I concede that I read the post rather quickly and without taking the time to fully grasp everything.
Edit: Just adding that it helps to think of some interpreters as faster than others, and it may pay to translate code from one interpreter to another.
This is an introduction, details will be glossed over.
> This is an introduction, details will be glossed over.
Maybe, but I really don't like this. Sometimes it is necessary to simplify things (at least at first) to teach them, but it is very rarely necessary to say something actually wrong. For example, I think it is totally acceptable to ignore the existence of VMs and JITs in this chapter, because explaining what they are requires knowledge that the targeted readers don't yet have.
But I don't see what good it does to say that the two types of languages are compiled languages and interpreted languages. It may even be counterproductive, because the same paragraph that explains what a compiled language is as opposed to an interpreted one ends with this sentence: “You can think of the C language as a compiled language because most C language vendors make only C compilers to support programs written in C.” That sentence is now confusing for the inexperienced reader, even though it is the only acceptable one in the paragraph.
It would not have been more complicated to explain that languages can be implemented with either a compiler or an interpreter, then explain the difference between the two (this is already done), and then finish with that same sentence :).
"works for me"
I think it's a useful simplification to say something like "C can be interpreted, but it's almost always compiled in practice, so we usually say that it's a compiled language". Even with that, you've got to have an explanation of "compilers" vs "interpreters", with the attendant decisions involving the balance of simplification vs pedantry that it's so easy to get mired in (this thread being an example).
A document claiming "Learn [topic] in [short amount of time]" seems likely to err on the side of simplification (the alternative being perceived by the author as over-explanation). In that position, my solution might've been to avoid the compilation vs interpretation discussion, describe a compiler as a program for converting source code into a computer-friendly format, add a footnote to something in the appendix with a deeper explanation of other methods of preparing source code to run as a program, and move on.
But you can listen to a Queen best-of album at any age.
Please don't comment about the voting on comments. It never does any good, and it makes boring reading.